Next up, Robot weddings...


  • #16
    Originally posted by Gravekeeper View Post
    and yet somehow you leaned towards Social Woes?
    If a mod believes it should go in Grab Bag, I have no objection to them putting it there.

    Yes, I'm sure the time planning a PR stunt has set back robotics in Japan by *decades*.
    I never said it did.

    Seriously, what the hell is your problem here? I'm genuinely confused as to what your point is or why you posted this at all.
    It could be considered a social woe, because it's very possible that as these robots and AI get more advanced, we could face some unintended consequences.

    I'm not even sure yet if a lot of these robots that are being worked on conform to Asimov's laws of robotics.

    https://en.wikipedia.org/wiki/Three_Laws_of_Robotics

    I'm just saying we have to be careful with what we create robots to do.

    Comment


    • #17
      Originally posted by mjr View Post
      I'm just saying we have to be careful with what we create robots to do.
      The problem is that you didn't say that until now - not in the original post (where we could actually have debated something related to your post) and not in your first reply.

      Comment


      • #18
        Originally posted by mjr View Post

        It could be considered a social woe, because it's very possible that as these robots and AI get more advanced, we could face some unintended consequences.

        I'm not even sure yet if a lot of these robots that are being worked on conform to Asimov's laws of robotics.

        https://en.wikipedia.org/wiki/Three_Laws_of_Robotics

        I'm just saying we have to be careful with what we create robots to do.

        You should play The Talos Principle. I think you'd find it enlightening.
        I has a blog!

        Comment


        • #19
          Originally posted by mjr View Post

          It could be considered a social woe, because it's very possible that as these robots, and AI get more advanced, we could face some unintended consequences.
          I didn't see anything in the story you posted to indicate the robots at the wedding had any sort of AI. The robots were really only advanced in physical diversity and programming of movement.

          The topic interests me - it's actually one of my recurring nightmares (thanks, Will Smith) - but as much as I read about robotics and programming, I've yet to see any signs of what would be considered true AI. Nowhere has a computer done something outside the scope of its program, shown any true sentience or will of its own, or surprised its programmer. If you've seen one or some, please share!

          As for the laws of robotics, we already have self-guiding missiles designed to kill, computers with self-checks designed to destroy (some labs that study nasty diseases are built to self-incinerate if certain areas are detected to leak) despite the potential for loss of life, and machines that can be programmed to carry out euthanasia. I think we moved past the Three Laws of Robotics a long time ago.

          Comment


          • #20
            I'm not even sure yet if a lot of these robots that are being worked on conform to Asimov's laws of robotics.
            The laws of robotics are a fiction: a plot device used in a fictional series of short stories. Science fiction is a "what if." It asks "What if (thing)?" In this case, the question is "What if there were robots that could think like people, except they had these? Would that be enough? Would that be good? Would we have any right to even give them these? What would such creatures want or think like?"

            Like in Foundation, Asimov took a seemingly foolproof premise (the laws of robotics) and then toyed with it. And the laws of robotics are as much a real science as psychohistory is. Unless you want us looking out for evil psychic clowns, too, we don't need to worry about whether these robots have that plot device in their heads.

            Additionally, there is no NEED for the laws of robotics yet. We don't have technology that would require us to limit them, because we do not yet have truly sapient machines. It's not until a robot can think for itself that you would need to limit its actions. I wouldn't be surprised if we did (or if we didn't) reach the point of having sapient machinery in our lifetime. Until then, though, you don't need to worry that these two will be non-compliant.

            This is a PR stunt. It is meant to make, well, this happen: people talking about their company. It has no lesser nor greater purpose than that. It isn't a "sign of things to come" (though I will say that if we do reach the point that machines are sapient, I'll advocate for their rights as well); it's a sign that a marketing exec deserves a pay raise.
            "Nam castum esse decet pium poetam
            ipsum, versiculos nihil necessest"

            Comment


            • #21
              Originally posted by mjr View Post
              It could be considered a social woe, because it's very possible that as these robots and AI get more advanced, we could face some unintended consequences.
              That is a general concern with the development of AI, but it has nothing to do with a PR stunt robot wedding. Also, as mentioned, you said nothing of the sort in the OP. You just posted a link and left it at that.

              Also, any AI Japan develops will be the least of our worries. It's any AI the US develops that will be worrisome, as the US turns every technology towards warfare. In Japanese culture and beliefs, robots are essentially considered a sort of children, hence the way they develop them and what applications they develop them for.


              Originally posted by mjr View Post
              I'm not even sure yet if a lot of these robots that are being worked on conform to Asimov's laws of robotics.
              There is no AI on the planet advanced enough to really understand or follow Asimov's laws of robotics. Plenty of robots already violate them, though. I mean, you do know the US has been killing people with drones for years, right?

              Comment


              • #22
                Originally posted by mjr View Post

                https://en.wikipedia.org/wiki/Three_Laws_of_Robotics

                I'm just saying we have to be careful with what we create robots to do.
                I know this is a work of fiction, from an OLD video game (1993), and looking WAY too far ahead into the future, but this part of the intro for MegaMan X1 (SNES) spells out that point pretty well indeed (first paragraph from X's warning message on his capsule):

                "X" IS THE FIRST OF A NEW GENERATION OF ROBOTS WHICH CONTAIN AN INNOVATIVE NEW FEATURE - THE ABILITY TO THINK, FEEL, AND MAKE THEIR OWN DECISIONS. HOWEVER, THIS ABILITY COULD BE VERY DANGEROUS. IF "X" WERE TO BREAK THE FIRST RULE OF ROBOTICS, "A ROBOT MUST NEVER HARM A HUMAN BEING", THE RESULTS WOULD BE DISASTROUS AND I FEAR THAT NO FORCE ON EARTH COULD STOP HIM.

                Yeah, and those of you who are familiar with how this video game series plays out (the Terminator movie series is another good example) can see quite clearly what can happen, and why, as mjr pointed out, we must be careful about what we create robots/cyborgs/computer software and so on to do.

                Comment


                • #23
                  I'd also like to point out I, Robot. The movie.

                  It's not just the laws that are important. How you allow the robot to interpret the laws matters too.
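                  As a toy illustration (purely made up, not from the movie or any real robotics system), here's how the same First Law check can give opposite answers depending on how "harm" is interpreted:

```python
# Toy sketch: two interpretations of the First Law ("a robot may not harm
# a human"). The rule text is identical; only the definition of "harm" differs.

def strict_harm(action):
    # Interpretation A: any predicted injury at all counts as harm.
    return action["expected_injury"] > 0

def net_harm(action):
    # Interpretation B: only harm not outweighed by the benefit counts.
    return action["expected_injury"] > action["expected_benefit"]

surgery = {"name": "perform surgery", "expected_injury": 2, "expected_benefit": 9}

for first_law in (strict_harm, net_harm):
    allowed = not first_law(surgery)
    print(f"{first_law.__name__}: allow '{surgery['name']}'? {allowed}")

# strict_harm forbids the surgery; net_harm permits it.
# Same law, opposite behaviour - the interpretation is doing the work.
```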

                  Comment


                  • #24
                    Please don't bring up I, Robot, the movie. XÞ What about Age of Ultron? I could go for an omelette 'bout now…
                    "I take it your health insurance doesn't cover acts of pussy."

                    Comment


                    • #25
                      I'm not sure that the issue would necessarily come up, actually. The issue isn't artificial intelligence as such, but human-level artificial intelligence. If a robot of human-level intelligence were treated as subhuman, you are going to get issues. If they are treated as if they were human (barring the obvious differences, like the maintenance requirements robots have), then there isn't really any need for an AI rebellion. (And yes, part of this WOULD be figuring out if you even NEED an AI of human-level intelligence.)

                      Comment


                      • #26
                        To echo what stabeler said, if a human-level AI is created, it really is on us to legitimately treat it as a person. Some people will, some people inevitably won't. The latter will likely be the first against the wall. =p

                        This goes along with the current issue of how to actually develop a human-level AI. As in, how do you teach it things such as morality, which such an intelligence would need in order not to make tragic mistakes or decisions? This will be where Japan takes over the world.

                        See, the current line of thinking about this is to create a learning mind instead of trying to create a fully functional out-of-the-box intelligence. In effect, in order to get a truly human-like AI (as in, capable of morality, compassion, etc.) you would start said AI off like a newborn child. Then you would quite literally raise it like a child. Thus the Japanese approach to robots and technology is quite likely to be a lot more successful at this than the rest of ours, both in terms of objectives and acceptance, since we are, in essence, creating a new life form.

                        The second concern is that if we do not do this properly, when we inevitably trigger a technological singularity via AI, we won't like the results - provided we even survive them. >.>
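                        To make the "learning mind" idea concrete, here's a deliberately crude sketch (all names invented, nothing like any real system): an agent that starts with no preferences at all and only acquires them from the feedback it is "raised" on.

```python
# Crude sketch of a "learning mind": the agent ships with no behaviour and
# learns its preferences entirely from feedback, the way the post describes
# raising an AI rather than programming it fully formed. Everything here is
# invented for illustration.
import random

actions = ["share", "ask", "ignore", "grab"]
values = {a: 0.0 for a in actions}   # starts knowing nothing
learning_rate = 0.1

def caregiver_feedback(action):
    """Stand-in for the human doing the 'raising': praise or scold."""
    return {"share": 1.0, "ask": 0.5, "ignore": -0.2, "grab": -1.0}[action]

for step in range(2000):
    # Mostly act on what it has learned so far, but keep exploring a little.
    if random.random() < 0.1:
        action = random.choice(actions)
    else:
        action = max(values, key=values.get)
    reward = caregiver_feedback(action)
    # Nudge the value of the chosen action toward the feedback it received.
    values[action] += learning_rate * (reward - values[action])

print(sorted(values.items(), key=lambda kv: -kv[1]))
# After being "raised", the agent prefers "share" and "ask" over "grab".
```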

                        Comment


                        • #27
                          I think all of you are forgetting a rather important aspect of AI: self-awareness. How can any machine realize that it is oppressed without self-awareness? You could teach it morality, because that's an exercise in how you treat others, but the capability to learn doesn't necessarily make something an AI. There are already machines out there that are capable of learning (such as "Watson," the IBM supercomputer), but that doesn't make them AIs.

                          I suspect that creating an AI would involve giving a machine a soul, which would be a spontaneous event if it ever happened at all, and it happening once would be quite the stretch. I'm not really worried about AIs ever going rogue, the chance of even creating one seems so small that it's more effort to think about it than it's worth.

                          However, that does NOT mean a programming error in any sort of computer couldn't spell disaster. I know that programming is exacting work - a missing backslash or a typo can result in NOTHING working like it's supposed to. With the Internet as massive as it is, it wouldn't be impossible for something disastrous to happen because of a typo somewhere.
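                          For a made-up example of the missing-backslash point: one character changes what a pattern means, so the code still runs, it just quietly does the wrong thing.

```python
# Hypothetical example: a single missing backslash changes the meaning of a
# regular expression. No error is raised; the program just misbehaves.
import re

line = "order 1234 shipped"

good = re.search(r"\d+", line)   # \d+ means "one or more digits"
bad  = re.search(r"d+", line)    #  d+ means "one or more letter d's"

print(good.group())  # "1234"
print(bad.group())   # "d" (from "order") - no crash, just the wrong result
```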

                          Comment


                          • #28
                            Originally posted by Aragarthiel View Post
                            I think all of you are forgetting a rather important aspect of AI: self-awareness.
                            Er, I don't think anyone was forgetting that at all. More like self-awareness is kind of a given/obvious part of the topic, so there's not much need to mention it. Self-awareness is the proverbial Holy Grail of creating a true artificial intelligence.



                            Originally posted by Aragarthiel View Post
                            I suspect that creating an AI would involve giving a machine a soul, which would be a spontaneous event if it ever happened at all, and it happening once would be quite the stretch.
                            That's a really weird thing to insert into the discussion and a completely different topic.


                            Originally posted by Aragarthiel View Post
                            I'm not really worried about AIs ever going rogue, the chance of even creating one seems so small that it's more effort to think about it than it's worth.
                            Theoretically, we're probably in just as much danger, if not more, from an AI that is not self-aware, as anything negative that occurs - be it from programming error, malfunction, sabotage, etc. - will merely be a function the AI is carrying out, rather than a conscious decision it's capable of understanding and reflecting on, or an error within itself that it can recognize and correct.

                            I mean, would you rather a machine overlord that's trying to kill you because it legitimately hates you but can be reasoned with? Or a machine overlord that's trying to kill you because it's missing a DLL and is stuck in "Kill" mode?

                            As for the effort, uh, have you met Google's dreaming, cat-obsessed neural network yet? The entire point of Google DeepMind is essentially to build a computer network that mimics the structure and operation of a human mind.
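                            (For anyone curious what "a network that learns" even means at the most basic level, here's a bare-bones sketch - nothing like Google's actual systems, just the core idea of layers of simple units adjusting their weights from examples.)

```python
# Bare-bones sketch of a neural network learning a pattern (XOR) from
# examples by adjusting its weights. This is a toy, not anything Google runs.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)     # input -> hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)     # hidden -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)        # hidden activations
    out = sigmoid(h @ W2 + b2)      # current guesses
    # Backpropagation: shift weights in the direction that reduces the error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))   # should be close to [[0], [1], [1], [0]] after training
```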
                            Last edited by Gravekeeper; 07-27-2015, 11:09 PM.

                            Comment


                            • #29
                              AND do not forget Omnius and Erasmus, the thinking machines from the Dune universe.
                              I'm lost without a paddle and I'm headed up sh*t creek.

                              I got one foot on a banana peel and the other in the Twilight Zone.
                              The Fools - Life Sucks Then You Die

                              Comment


                              • #30
                                The TV show "Person of Interest" has some interesting plotlines concerning AIs, for those interested in that. Even if not, the show still makes for some great TV.

                                Yes, you need to work a bit on your suspension of disbelief at times (the show sometimes stretches credibility). But I find the stories, and especially the characters, well worth the occasional logic hole.
                                "You are who you are on your worst day, Durkon. Anything less is a comforting lie you tell yourself to numb the pain." - Evil
                                "You're trying to be Lawful Good. People forget how crucial it is to keep trying, even if they screw it up now and then." - Good

                                Comment
