Category Archives

15 Articles


Week 6 Response

Posted by Mohamed Layachi on

“The Offspring” is a quite interesting episode about the development of a new artificially intelligent life form created by Data, who is himself the first android in the Star Trek series. He creates what he calls Lal from a series of neural network copies of his own system and knowledge database. Lal is referred to as his child, as Data feels he is passing on his legacy and uniqueness as a species, but Picard feels this was a grave mistake. Creating a new life form bothers him because Data was once a unique being, and now that he has created life it brings up a slew of ethical questions. This step forward in AI development is what worries the Starfleet admiral, because it is a huge stride in the field. Data made this stride in secret, and Starfleet primarily worries that this transition will end their control over all technological development. More importantly, they understand how dangerous it could be if there were more androids that could be stolen by enemies who might use them for destruction.

This relates so clearly to how technology is developing at this very moment. Artificial intelligence is heavily researched and developed, but where can a society begin to draw the line on the ethics of the matter? How can we teach a system to think for itself and still expect it to abide by our strict system of behavioral norms and ethical judgments? The more developed we become as a society, the more control we lose over the things we create. In my field, AI seeks to help with the processing we require to read and analyze data. There are many things that programs help us complete exponentially faster, and ultimately I can see a future where AI should focus on improving those practices. It worries me that AI could become destructive; that is why it should be monitored closely, so that it functions within the parameters of what we require it to do.


Week #6 Response #5

Posted by Sameer Kunwor on

“The Offspring” From Star Trek

Artificial intelligence and artificial life have long been controversial topics among scientists, psychologists, and many others who study human life and development. The main issue is differentiating artificial life from natural life, particularly in the case of human beings. With artificial life, we often come to the dilemma of where artificial life ends and natural life begins. If something is sentient, is it alive? Is a robot that can effectively communicate with a human being alive? Or are there other factors?

“The Offspring” is the name of the Star Trek: The Next Generation episode in which Data, an android aboard the Enterprise, creates an artificial being. Data created the offspring and programmed it with the ability to act almost human, much like Data himself. Data also considered the creation to be his child, and when confronted about it he claimed that he had as much right to “procreate” as the human members of the Enterprise did.

Data creates his offspring in the belief that if any of the members of the crew had decided to have a child, they would likely not have been met with any opposition. When confronted about the creation of his “child,” he claims that once he had been accepted as a sentient member of the crew, he had also gained the right to “procreate”; however, since he cannot procreate in the fashion that humans can, he created his own child in the fashion that he could.

Lal is interrogated by Haftel later in the episode. During this interrogation scene, she begins to showcase more of her ability to have emotions. She expresses a strong desire to stay on the Enterprise with Data, whom she calls her father, and the rest of the crew, whom she is friends with. Although Haftel is too close-minded to accept that a robot can display these kinds of emotions, it is very evident that Lal is capable of feelings. This is confirmed in the scene directly afterward, when Lal visits Troi in her quarters after being interrogated. A clearly confused and distraught Lal is soon overwhelmed by the emotions she is feeling and returns to Data’s lab. These few scenes play a role in showing how thin the line between real human emotions and “artificial” emotions can be. If this event were to happen, it would be difficult to argue that Lal would not be able to pass as a human being.

Although humans today have not been able to create a robot as sophisticated as Lal, it is only a matter of time before technology catches up to the imaginations of human beings. When it does, the philosophical line between artificial life and real life will become blurred. It will be impossible to break the scientific barrier as robots will probably never be made up of cells, but it will be difficult to look at a robot with these emotional capabilities and not at least regard it as a human being.


Week 6

Posted by Gabriel Almonte on

Gabriel Almonte 

 

The main point of this episode was to challenge the ethical beliefs held in technological fields. Data is an android, and he creates another robotic being like himself, one that has feelings and many other human-like characteristics but was not conceived the same way humans are. Data considers it his offspring; he wants the offspring to pick a gender, and from its four options it chooses to be a human female, whom he names Lal. Tensions grow as people disagree about what to do with Lal. Data already views her as his child, so he has developed strong, loving feelings for her. People in Starfleet are upset that he created Lal without informing anyone; he argues that it would not have been a problem if a human had created an offspring in private. Others in Starfleet believe that Lal should be taken to a scientific lab to be tested and evaluated to see what the possible outcomes are. Lal is interviewed, and she develops a sense of fear because she thinks something bad may happen to her. To me, that is enough reason not to test her and to let her live as a human. However, Starfleet still believes she belongs in a scientific facility. Then everyone finds out that Lal has left, but she later comes back because she malfunctioned and is programmed to return to Data when she does. They realize they must work fast or she will die; others start to help Data, but in the end they have no success and Lal dies. Data forgives everyone who disagreed with him, because even the people who did not think of her as human helped another parent when he needed help. If a technological being behaves similarly to a human, it should be protected as such.


Reem’s Week #6 response

Posted by Reem Malek on

This episode of Star Trek is about an android, Lal, made by Data, a fellow android. Data regards the android as his own child, and the android even calls Data her father. Lal begins learning human behavior by watching the crew on the ship. Eventually, an issue develops in Lal’s brain and makes the android glitch. Data discovers that he cannot save Lal, so he deactivates the android. Lal had been becoming more human by expressing the love she felt toward her father, Data. This raises the question of whether it was ethical to deactivate Lal. Creating an android like Lal could be extremely hazardous, and it raises the question of whether it is ethical to make such a creation. I am studying electrical engineering, and one part of the IEEE code of ethics is to not create anything that would be harmful to society. Although the android was not a danger to society, one day somebody could make an AI android that could take over the world. I believe that as long as you know what you are doing and have the experience, it is ethical to make an android like Lal. Although Data was unable to save Lal, the creation of Lal is important for the advancement of technology. It could be said that Lal is the progeny of Data, since he was the one who made her. It could also be said that a child lives with the parent until they can fend for themselves; the parent teaches them everything about life and how the world works. The commander, however, does not think so. He is attempting to break this bond between the parent and the child, and I believe that is not ethical. He has no right to separate them, as they are connected, and he especially should be understanding on this subject since he is a father himself. If androids were somehow able to reproduce themselves, the world might have a new population of a different breed.


Geetangalie’s Week #6 Response

Posted by Geetangalie Goberdan on

In the Star Trek: The Next Generation episode “The Offspring,” we learn of a new artificial intelligence aboard the starship Enterprise. During his off time, Data creates an android based on his own structure. He names this creation Lal and claims it as his own child. At the beginning of the episode, the AI is gender neutral and lacks basic skills. Data, taking on the role of father, allows Lal to choose its gender, and we get to see it choose to be female from thousands of choices. At this point, Data and Lal are ready to begin Lal’s training in social skills at the most elementary level. Lal has many difficulties with simple tasks and tends to be socially awkward, but this does not discourage her from continuing to better herself.

The first problem arises when Captain Picard finds out that Data has created Lal in secrecy and, furthermore, that he has named her his child. Captain Picard believes this is completely wrong, as he sees Lal as an invention, not a child. The captain later drops this mentality when Deanna Troi poses the question, “Why should biology rather than technology determine whether it is a child?”

Upon finding out about Lal, the head of Starfleet decides it is right for them to take ownership of the creation so that they can also observe and experiment with her. Data is totally opposed to this, as Lal is his daughter and he believes he should be the one raising her. In the end, while the intense debate over who should keep Lal is taking place, they get a call about Lal’s health. Lal turns out to be unsalvageable, and she must be turned off.

Ethics in the medical field is a broad topic, as doctors are faced with ethical decisions every day. One common example is dealing with patients who cannot afford medical assistance, whether it be their medications or their scans. Technology plays a huge role in medicine, as it is used to do what the human cannot, whether in terms of efficiency or ability, among other things.


Comment #6

Posted by Weijun Huang on

In Star Trek: The Next Generation Season 3, Episode 16, “The Offspring,” Data creates a humanoid robot named Lal, built according to his own structural design and Federation techniques, and regards it as his own child. Data then encourages Lal to interact with other crew members to learn about human customs. I am a computer science major, and I am shocked by the Lal created by Data, because in the episode Lal’s behavior is even more intelligent and thoughtful than Data’s, and Lal’s appearance is the same as that of a human being. This means that Data is smarter at creating robots than the people who created Data. In other words, if Data wanted to gain freedom, it would be difficult for the crew to control him. On the technical side, in real life I cannot imagine how complex the code needed to create a robot like Lal would be. For me, some relatively basic code has already taken several hours to think through and write, so this is a technology we cannot seriously consider for the time being. But despite that, if this technology becomes available and is used, it could make our world more convenient, and in medicine, more diseases could be treated.

In my opinion, I don’t support the presence of robots holding emotions. I have seen many films about the struggle between humans and robots, in which humans think the robots are under their control, but in fact some robots have already awakened; they long for freedom and believe they are smarter than humans and should not be under their control. The result is heavy human casualties.


Sambeg’s Weekly Response #6

Posted by S Raj on

Sambeg Raj Subedi
ENGL 21007-S
Prof. Jesse Rice-Evans
Weekly Assign#6
03/11/2019

Star Trek: The Next Generation Season 3, Episode 16, “The Offspring,” was one of the best episodes I have watched so far. This episode was mainly about Lal, the daughter of Data. Data, who is himself an android, created a robot identical to him in terms of memory, capacity, and power. Data considered himself the father of Lal. He allowed Lal to choose her own gender, and Lal chose to be a girl. Though Lal had a positronic brain, she was curious to know about everything, just like a human baby. Data really did a good job of guiding her to socialize and learn new things. She was placed in a bar as a waitress so that she could understand human behaviors and habits. Captain Picard was initially unhappy with Data’s experiment, but later Data was able to convince him. An admiral from Starfleet wanted to take her away so that she could get more exposure, but Data strongly refused his proposal. As a result, the admiral decided to meet them in person and, after evaluating her progress, make a final decision. He was initially not that impressed, but in the end, seeing the bond between Data and Lal, he got emotional and felt sorry for his actions. Lal, knowing that she would be separated from her father, experienced fear. This created a permanent neural disorder in her system that was almost impossible to fix, so she had to be deactivated. It was heartbreaking for everyone on the starship.
In this episode, we can see that Data created Lal to continue his species on the starship. I don’t think allowing robots to recreate their own species is a good decision. Robots are made to ease human activities, so these creations should be totally under human control, because they are more accurate and powerful than humans. If they are allowed to make their own decisions, then we cannot say that one day they won’t use the same tools to rule over humans. In my own field of study, the computer plays a vital role. I cannot imagine my work without a computer, because all of the designing, creating, and editing is done on computers, but that does not mean I should be replaced by a computer. The use of computers and technology should be kept within certain limits.


Kayla’s Week 6 Response

Posted by Kayla Ye on

In the field of chemical engineering, there are many instances in which we are very dependent on the use of technology. If it were possible to perfect the creation of androids, so much the better. In ChE, there is constant contact with elements of all kinds, those that are safe and those that aren’t. There are many instances where experiments cannot be carried out or discoveries cannot be made simply because the materials needed are too dangerous, highly radioactive, for example. This issue holds the scientific world back so much because, as humans, there is the danger of losing one’s life. However, if such science androids could be perfected, that would open many doors. Just as the surgical robot released a few years prior began a revolution in surgery, androids would do the same in chemical engineering. On the other hand, to what extent can there be a guarantee that androids would accept the dangers of this profession? As said in Star Trek: The Next Generation, the android Data said his purpose was to “contribute in a positive way to the world in which we live in,” but how long until these androids become self-aware and realize that they are being treated differently? In a previous response, I raised the question of the rights of androids in society: are they equivalent to a human, and do they possess the rights that humans possess? But whatever the decision is, what role does ethics play? There is no definite answer that the doors being opened are the road to a more advanced society worthy of its risks, because if there were, an engineer would have already breached the rules of ethics.


Week 6 Response

Posted by Carlton Yuan on

Star Trek: The Next Generation Season 3, Episode 16, “The Offspring,” is about an android named Lal that was created by Data. Data considers the android his own child, and the android even calls Data her father. Lal starts learning human behaviors by observing the crew members on the ship. In the end, a problem develops in Lal’s brain and causes the android to malfunction. Data soon finds out that he cannot save Lal, and so he deactivates the android. Lal was becoming more of a human by expressing her love toward her father, Data. This brings up the question of whether it was ethical to deactivate Lal. Creating an android like Lal could be really dangerous, and it brings up the question of whether it is ethical to make such a creation. I am majoring in electrical engineering, and one part of the IEEE code of ethics is to not create anything that would be harmful to society. Although Lal the android was not a danger to society, one day someone could create an AI android that could take over the world. This reminds me of Ultron from the Avengers. Tony Stark created Ultron in order to protect Earth, but Ultron turned out to be a supervillain determined to destroy it. Tony created Ultron using material he was not familiar with, the Mind Stone. I believe that as long as you know what you’re doing and have the experience, it is ethical to create an android like Lal. Although Data was unable to save Lal, the creation of Lal is important for the advancement of technology.

 


ZhiHong Li Response #6

Posted by ZhiHong Li on

The video is very interesting in showing how an AI android is able to think as much as we human beings can. From my understanding of the science involved, it would require a great deal of knowledge to code a robot’s brain to think as much as we can. We have all watched videos of robots that can talk, read, and think independently, but we have not seen any exist around us. Which robots count as robotic life? Is such a robot the same as a human baby, or just a developed human being? That is a question to consider. If the robot is considered a newborn baby, then there is something else to think about: we know that a newborn baby is considered pure, but as a human being grows up, there always comes a division between good and bad. That seems to be an idea worth thinking about. Also, how should we decide whether a robot is good or bad, and what should we base that on? And the robot mentioned here is created by another robot; whether that is scary or interesting depends on your point of view. Technology has improved, and a robot was coded that can create another robot that thinks for itself; that is the great power of technology. Will robots have power over human beings and take over humanity, like what we did to the animals that were once above us? Will it be dangerous for a robot to have its own mindset? We all worry about these questions because we humans have conflicts with each other, and those conflicts might evolve into greater problems if AI has its own thinking involved. And we know that AI can look very similar to a human being, so if an enemy uses AI for killing, it will be hard for us to deal with.
