Category Archives

138 Articles

Week #7 Response

QianXing’s Weekly Response #7

Posted by QianXing Ou on

After watching Black Mirror Season 3, Episode 1, I feel very sorry for Lacie. She is a great, enthusiastic girl with a grade of 4.2 (from my understanding, the scale runs from 1 to 5, so her grade is pretty high). Because she wants to join a program that requires a grade of 4.5, she tries every possible thing to raise it, so she decides to go to her friend’s wedding and give a speech, hoping it will boost her score. In the end, her grade drops dramatically and she ends up in jail.

I find this relatable. When I post something online, I always hope that more people will like it, and I feel bad if only a few people do. I know this is unhealthy behavior; it is just an online post, so why should I care so much? But even when I tell myself that, I still care about the likes, and I still care about the impression other people have of me. The same thing happens with other people. When I watch videos on YouTube, the YouTubers always want more likes on their videos. If they get enough likes, they make more videos; if they get few or none, they stop. Likes, or “grades,” online are like the money we use in the real world: we need them to push us to work harder. Even though I think likes are valuable because they encourage people to work harder and produce higher-quality work, too much of that pressure is terrible. It takes over your soul and pushes you to do things you don’t agree with.

Social media is a powerful tool. Some candidates use it to convince people to vote for them, and it gives people a chance to learn about others’ daily lives and to share their own. All of these are benefits of social media. However, if people take it too seriously, it can have the opposite effect. There is a quote often attributed to Albert Einstein: “I fear the day that technology will surpass our human interaction. The world will have a generation of idiots.” Don’t let social media control you; you should be the one controlling it.

Week #6 Response

Common #6

Posted by Weijun Huang on

In Star Trek: The Next Generation Season 3, Episode 16, “The Offspring,” Data creates a humanoid robot named Lal, built according to his own structural design and Federation techniques, and regards her as his own child. Data then encourages her to interact with the other crew members to learn about human customs. I’m a computer science major, and I am shocked by the Lal that Data created, because in the episode Lal seems even more intelligent and thoughtful than Data, and her appearance is the same as a human being’s. This means that Data is better at creating robots than the people who created Data. In other words, if Data wanted to gain his freedom, it would be difficult for the crew to control him. As for the technology, in real life I can’t imagine how complex the code would need to be to create a robot like Lal. For me, even some relatively basic code takes hours to think through and write, so this is a technology we cannot seriously consider for the time being. But despite that, if this technology ever becomes available and is used, it could make our world more convenient, and in medicine, more diseases could be treated.

In my opinion, I don’t support robots having emotions. I have seen many films about the struggle between humans and robots, in which humans think the robots are under their control, but in fact some of the robots have already awakened. They long for freedom, believe they are smarter than humans, and refuse to be controlled, and the result is heavy human casualties.

Assignments/Week #6 Response

Sambeg’s Weekly Response #6

Posted by S Raj on

Sambeg Raj Subedi
ENGL 21007-S
Prof. Jesse Rice-Evans
Weekly Assign#6
03/11/2019

Star Trek: The Next Generation Season 3, Episode 16, “The Offspring,” was one of the best episodes I have watched so far. This episode was mainly about Lal, the daughter of Data. Data, who is himself an android, created a robot identical to him in terms of memory, capacity, and power, and considered himself Lal’s father. He allowed Lal to choose her own gender, and Lal chose to be a girl. Though Lal had a positronic brain, she was curious to know about everything, just like a human child. Data did a good job of guiding her to socialize and learn new things; she was placed in the ship’s bar as a server so that she could understand human behaviors and habits. Captain Picard was initially unhappy with Data’s experiment, but Data was later able to convince him. An admiral from Starfleet wanted to take her away so that she could get more exposure, but Data strongly refused his proposal. As a result, the admiral decided to meet them in person, evaluate her progress, and then make a final decision. He was initially not that impressed, but in the end, seeing the bond between Data and Lal, he became emotional and felt sorry for his action. Lal, knowing that she would be separated from her father, experienced fear. This created a permanent neural disorder in her system that was almost impossible to fix, so she had to be deactivated. It was heartbreaking for everyone on the starship.
In this episode, we can see that Data created Lal to continue his species aboard the starship. I don’t think allowing robots to recreate their own species is a good decision. Robots are made to ease human work, so these creations should be totally under human control, because they are more accurate and powerful than humans. If they are allowed to make their own decisions, we cannot be sure that one day they won’t use those same abilities to rule over humans. In my own field of study, the computer plays a vital role. I cannot imagine my work without a computer, because all of the designing, creating, and editing is done on computers, but that does not mean I should be replaced by one. The use of computers and technology should be kept within certain limits.

Assignments/Week #5 Response

Sambeg’s Weekly Response #5

Posted by S Raj on

Sambeg Raj Subedi
ENGL 21007-S
Prof. Jesse Rice-Evans
Weekly Assign#5
03/05/2019

The movie Ex Machina was so powerful that it made me stop for a second and think about the future of today’s world. Briefly discussing the movie: Nathan, Caleb, and Ava were the central characters. Nathan owned a company called Blue Book, where Caleb worked as an employee. Caleb was selected for Nathan’s experiment, in which he had to perform a “Turing test” for a week. Nathan’s experiment was about implementing artificial intelligence in human-looking machines, robots. He had created many robots, but he wanted to run the test on Ava, one of his experiments. Caleb was asked to communicate with Ava and judge, mentally and emotionally, whether she found herself limited as a machine. Ava, despite being a machine, had feelings, emotions, and sexuality, and with the intention of manipulating Caleb, she showed him love and affection. She even tried to flirt with him and turned him against Nathan. During that time, she had also developed the ability to cause “power cuts” so that she could keep their conversations secret. But in the end, it turned out she was just looking for a way to escape. Caleb thought Nathan was doing wrong by isolating the robots from the outside world and tried to help them escape, but I think Nathan already knew what could happen if they were freed. So, in the end, when he realized they were going to escape, he completely destroyed one of the robots and damaged Ava’s right hand.
In today’s world, the use of AI has increased rapidly. People like to use AI technology in banking, business, and almost every other field for their profit. In some cases it has proven fruitful, too, such as in healthcare. But if this powerful tool is not handled carefully, the best tool can become the worst. From this movie, we realize how technology can overtake human abilities and go beyond our control.

Week #6 Response

Kayla’s Week 6 Response

Posted by Kayla Ye on

In the field of chemical engineering, there are many situations in which we depend heavily on technology. If it were possible to perfect the creation of androids, so much the better. In ChE, there is constant contact with elements of all kinds, those that are safe and those that aren’t. There are many cases where experiments cannot be carried out or discoveries cannot be made simply because the materials needed are too dangerous, highly radioactive, for example. This holds the scientific world back because, as humans, we risk losing our lives. However, if such androids could be perfected, that would mean many opened doors. Just as the surgical robot released a few years ago began a revolution in surgery, androids would do the same in chemical engineering. On the other hand, to what extent can we guarantee that androids would accept the dangers of this profession? In Star Trek: The Next Generation, the android Data said his purpose was to “contribute in a positive way to the world in which we live,” but at what point will these androids become self-aware and realize that they are being treated differently? A previous response raised the question of androids’ rights in society: are they equivalent to a human’s? Do they possess the same rights that humans possess? But whatever the decision is, what role does ethics play? There is no definite answer that these opened doors are the road to a more advanced society worth its risks, because if there were, an engineer would already have breached the rules of ethics.

Week #6 Response

Week 6 Response

Posted by Carlton Yuan on

Star Trek: The Next Generation Season 3, Episode 16, “The Offspring,” is about a humanoid android named Lal that was created by Data. Data considers the android his own child, and the android even calls Data her father. Lal starts learning human behaviors by observing the crew members on the ship. In the end, a problem develops in Lal’s brain and causes her to malfunction. Data soon finds out that he cannot save Lal, so he deactivates her. Lal was becoming more human by expressing love toward her father, Data. This brings up the question of whether it was ethical to deactivate Lal. Creating an android like Lal could be really dangerous, and it raises the question of whether it is ethical to create such a being. I am majoring in electrical engineering, and one part of the IEEE code of ethics is to not create anything that would be harmful to society. Although Lal was not a danger to society, one day someone could create an AI android that could take over the world. This reminds me of Ultron from the Avengers. Tony Stark created Ultron in order to protect Earth, but Ultron turned out to be a supervillain determined to destroy it. Tony created Ultron using material he was not familiar with, the Mind Stone. I believe that as long as you know what you’re doing and have the experience, it is ethical to create an android like Lal. Although Data was unable to save Lal, the creation of Lal is important for the advancement of technology.


Week #6 Response

ZhiHong Li Response #6

Posted by ZhiHong Li on

The episode is very interesting in how the AI, the android, is able to think as much as we human beings can. From my understanding of the science field, it would require a lot of knowledge to code a robot’s brain to think the way we do. We have all watched videos of robots that can talk, read, and think independently, but we haven’t seen any exist around us. Should such a robot be considered robotic life? Is it the same as a human baby, or just a developing human being? That is a question to consider. If the robot is considered a newborn baby, then there is something to think about: we consider a newborn pure, but as human beings grow up, there always comes a division between good and bad. That seems like an idea worth thinking about. Also, how should we judge whether a robot is good or bad, and what should we base that on? And the robot mentioned here is created by another robot; whether that is scary or interesting depends on your point of view. Technology has improved to the point where a robot can be coded to create another robot that thinks for itself, and that is the great power of technology. Will robots gain power over human beings and take over, the way we did to the animals that were once above us? Will it be dangerous for a robot to have its own mindset? We all worry about this question because we humans already have conflicts with each other, and those conflicts might evolve into greater problems if AI with its own thinking gets involved. And since AI can look very similar to a human being, if an enemy used AI for killing, it would be hard for us to deal with.

Week #6 Response

Week #6 Response

Posted by Ming Hin Cheung on

In this episode, Data becomes a father. He creates another android, naming her Lal. This episode not only re-examines Data’s rights but the rights of all androids like Data. It also tackles a very difficult subject: tech and ethics. The most emotional moment to watch was when Lal came to the realization that she would be separated from her father. Experiencing a flood of emotion, and not knowing what to do with it, she went to Counselor Troi’s quarters. Something began to well up in my eyes at the words “I feel…” as she struck her chest with the tips of her fingers, followed later by, “This is what it means to feel!” This is a very powerful episode for me. The actress who plays Lal does a remarkable job in the role as well; she is able to portray so many emotions even while remaining an android. I really wish the show could have had more time with Data and Lal, but then again, this single moment in time was probably more impactful than any long story arc could have been.

Week #6 Response

Hakeem Leonce Week 6 response

Posted by Hakeem Leonce on

Data was such a unique character to me. If he were just a regular human-made robot, he would have failed in season one. His kindness, innocence, and compassion were his defining characteristics. In almost every way that mattered, he was one of the most human characters on this show, lol.


Data is also basically as human as can be, with a perfect memory, so the need for procreation feels almost irrelevant if you think about it. Yet Data was a remarkably good parent. He was attentive, endlessly patient, informative, and never embarrassed or humiliated by the ridiculous actions of his daughter.


Lal was such a lovely character too. While clearly a robot, she possessed characteristics that were very human at the same time. Her struggles to learn about and experience the new world around her were mostly painted in broad strokes, but there was enough in those moments to highlight the important stuff: what it meant to Data to emulate humanity, and the isolation and otherness inherent in their existence. Each scene established her presence in Data’s life and among the crew, making the audience care for her more and more, to the point where I didn’t want her to go.


In the big picture, her death was very sad. Her final moments, feeling emotions and being unable to process them, were simply beautiful. That made her as human as possible, because similarly, when we have to deal with emotions we do not know how to handle, we too are unable to process them.


Week #6 Response

Weekly Response 6

Posted by Roman Cook on

Technology and Ethics are two crucial aspects of the engineering world. They dictate how we do things, solve problems, relate to each other and communicate.
Obviously, we are becoming more dependent on technology in our lives and work environments. As Rick Smolan said, “Every time there’s a new tool, whether it’s Internet or cell phones or anything else, all these things can be used for good and evil. Technology is neutral; it depends on how it’s used.” (Brainy Quote) Engineering is no different in this respect. We use tech to solve calculations, generate models for our buildings, and communicate with our colleagues. We can also become too dependent on it to some extent; it would be virtually impossible to go a day without using tech in some fashion.
Likewise, ethics are an essential bedrock of any industry or business. Ethics are the hidden guideline for how we coexist and work together in a quality manner. Without ethics, much of every industry would be corrupt and impossible to navigate fairly. Bidding jobs, contracts, safety, and many more engineering-based core principles would be in jeopardy without ethics.
Is there a parallel between ethics and tech in the workspace? I believe technology somewhat negatively affects ethics. The more we use tech, the less we work together to form the solid relationships from which trust is built. This leaves us with weaker ethics overall, since there are fewer face-to-face interactions. Ethics are supposed to be woven into our industries naturally, but it is difficult to uphold those ethics when we spend less and less time together.
