Electric Tortoise
Short but stylish and effective, this narrative-based bit of interactive art takes cues from classics like I, Robot as you interrogate a robot suspected of murder... even though he doesn't exactly see it that way.
Powerful, sketchy, and quite intriguing. If the game returned to the main screen without needing to be refreshed, it would get a 5/5. Until then, 4/5.
A proper ending instead of just fading to black would also be nice, as would having it release your cursor instead of needing to press Escape.
Rick Holden is a concatenation of two Blade Runner characters (Rick Deckard and Holden).
The office is clearly the Leon/Holden scene, ditto the title.
That "Will I dream?" isn't that a line from HAL-9000?
I'm guessing that with the different responses there are many different outcomes. I chose NOT to shoot the robot.
What plug-in does it use? When I try to play, it says I don't have the right plug-in, but it doesn't tell me what I need.
Hey guys, developer here.
Really appreciate the write-up and kind words.
Nathaniel, I've updated the game so it returns to the beginning. Thank you for your feedback, and thank you for playing. =]
Ethan, you need the Unity web player plug-in. You can also download the game off gamejolt.com if the web player doesn't work for you.
A thing, though...
The Three Laws of Robotics (and later the Zeroth Law) are formulated in a context where robots are designed only to obey those laws (i.e., never to have consciousness, emotions, etc.), so that they absolutely obey humanity and thereby benefit it.
Therefore, the situation in the game would never actually happen, simply because the robot isn't supposed to have the capability to "think" in the first place.
Another thing:
The first law always overrides the second law, so no matter what the robot is instructed to do, it cannot assist in a suicide.
Not quite, @argyblarg.
Look at The Caves of Steel and The Naked Sun. The key word is 'knowingly'. In the Asimov stories, robots are indeed accessories to murder by smuggling weapons (including one that detaches his arm to give to someone who uses it as a club), and a robot that looks like a human causes someone who is pathologically afraid of actual contact with others to be scared to death.
That being said, my take on things is that a non-malfunctioning robot should find the harm of death greater than the harm of suffering, and come down on the side of not assisting in the suicide.
Interesting. Am I right in saying that the name 'Electric Tortoise' is a nod to the Philip K. Dick short story 'The Electric Ant'?
And yes, HAL 9000 says 'Will I dream?' in the film version of 2010.
Mildly enjoyable.
Can I just say, I really dislike the new JIG layout with the oddly shaped game window box and the review underneath. It's... not my cup of tea.
This work is interesting and enjoyable, but I find the basic premise to be a bit annoying.
Assuming an Asimov-style Three Laws robot, what I've read of those stories seems to indicate that there should be only three possible outcomes to this scenario:
1) The robot's first law duty to prevent harm through inaction would overrule its second law duty to follow orders, and it would seek outside help for John.
2) The robot's first law duty to prevent harm through inaction would be interpreted as not yet coming into effect, and while the robot still could not be ordered to kill him, John could have sent it away on an errand and killed himself while it was gone.
3) The robot's first and second law duties would match perfectly in importance of application, creating a hazard condition and causing the robot to cease functioning effectively, if at all.
If the robot believed that preventing injury through inaction (the robot must not allow John to continue suffering) outweighed preventing injury by direct action (the robot must not kill John), then that duty would also outweigh the second law entirely, allowing the robot to call in outside help over John's wishes rather than assist in his suicide.
I believe that the apparent dichotomy in the first law which is being exploited here does not actually exist if we follow Asimov's examples. It could be argued that the author isn't following Asimov that closely, but the allusions are very strong.
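To make that reading concrete, here is a rough, purely illustrative sketch. The Option fields, the weights, and the decide() helper are my own invention, not anything taken from the game or from Asimov's text; they only show why a strict first-over-second-law ordering lands on outcome (1), or on outcome (3) if every option violates the first law.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    harms_human_by_action: bool      # first law: "injure a human being"
    allows_harm_by_inaction: bool    # first law: "through inaction, allow..."
    obeys_order: bool                # second law: "obey orders given by humans"

def first_law_violated(o: Option) -> bool:
    # Under this reading, both clauses of the first law count,
    # and either one rules the option out entirely.
    return o.harms_human_by_action or o.allows_harm_by_inaction

def decide(options: list[Option]) -> str:
    # Keep only options that do not violate the first law...
    lawful = [o for o in options if not first_law_violated(o)]
    if not lawful:
        # ...and if none remain, the conflicting duties are equally binding:
        # outcome (3), the robot ceases to function effectively.
        return "conflict: robot locks up"
    # Among the survivors, prefer the one that also obeys the order (second law).
    lawful.sort(key=lambda o: o.obeys_order, reverse=True)
    return lawful[0].name

scenario = [
    Option("assist in the suicide", harms_human_by_action=True,
           allows_harm_by_inaction=False, obeys_order=True),
    Option("do nothing, as ordered", harms_human_by_action=False,
           allows_harm_by_inaction=True, obeys_order=True),
    Option("call for outside help", harms_human_by_action=False,
           allows_harm_by_inaction=False, obeys_order=False),
]

print(decide(scenario))  # -> "call for outside help", i.e. outcome (1)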
I enjoyed the story. On my second play-through, though, I was disappointed to find that the player's options had next to no bearing on the story. The script appears to alter only a few words based on your last response and otherwise tells the exact same story. I feel this was a good base for something that could have had more layers to it.
Update