Discussion:
How Would you treat a Centurion
(too old to reply)
Karl
2009-02-16 15:42:44 UTC
Permalink
Question: If we created a viable AI and created robots that were
intelligient to the point of self-awareness how would you treat them? Would
you treat them as slaves? Would you treat them as pets? Would you see them
as people?
Magnus, Robot Fighter
2009-02-16 15:48:35 UTC
Permalink
Post by Karl
Question: If we created a viable AI and created robots that were
intelligient to the point of self-awareness how would you treat them? Would
you treat them as slaves? Would you treat them as pets? Would you see them
as people?
I'd get a really good firewall and not have one in the house. Out and
about I'd be reallllly nice to them.
Karl
2009-02-16 20:37:47 UTC
Permalink
Post by Magnus, Robot Fighter
Post by Karl
Question: If we created a viable AI and created robots that were
intelligient to the point of self-awareness how would you treat them?
Would
you treat them as slaves? Would you treat them as pets? Would you see them
as people?
I'd get a really good firewall and not have one in the house. Out and
about I'd be reallllly nice to them.
would you treat more like property or would the fact that it would converse
with you (ok BSG's Centurions don't) but would they be a "person" or a
"slave"?
Magnus, Robot Fighter
2009-02-17 04:50:27 UTC
Permalink
Post by Karl
Post by Magnus, Robot Fighter
Post by Karl
Question: If we created a viable AI and created robots that were
intelligient to the point of self-awareness how would you treat them?
Would
you treat them as slaves? Would you treat them as pets? Would you see them
as people?
I'd get a really good firewall and not have one in the house. Out and
about I'd be reallllly nice to them.
would you treat more like property or would the fact that it would converse
with you (ok BSG's Centurions don't) but would they be a "person" or a
"slave"?
A person of course.
efeatherston
2009-02-16 15:52:03 UTC
Permalink
Question:  If we created a viable AI and created robots that were
intelligient to the point of self-awareness how would you treat them?  Would
you treat them as slaves?  Would you treat them as pets?  Would you see them
as people?
I'd make sure I had a really reliable remote off switch on me at all
times.
GMAN
2009-02-16 17:11:57 UTC
Permalink
Post by Karl
Question: If we created a viable AI and created robots that were
intelligient to the point of self-awareness how would you treat them? Would
you treat them as slaves? Would you treat them as pets? Would you see them
as people?
I'd make damn sure not to introduce them to Skynet!
efeatherston
2009-02-16 18:25:51 UTC
Permalink
Question:  If we created a viable AI and created robots that were
intelligient to the point of self-awareness how would you treat them?  Would
you treat them as slaves?  Would you treat them as pets?  Would you see them
as people?
I would build in Asimov's Three Laws of Robotics
Ryan P
2009-02-16 18:39:52 UTC
Permalink
Post by efeatherston
Post by Karl
Question: If we created a viable AI and created robots that were
intelligient to the point of self-awareness how would you treat them? Would
you treat them as slaves? Would you treat them as pets? Would you see them
as people?
I would build in Asimov's Three Laws of Robotics
Hard coded onto non-writeable, heavily shielded circuits that are
directly in control of the robot's power systems, which would also be
hard coded to require a valid checksum from the "three laws" circuit.

Of course, the three laws wouldn't prevent the robots from enslaving
the humans to prevent them from harming themselves...
Karl
2009-02-16 20:40:10 UTC
Permalink
Post by Ryan P
Post by efeatherston
Post by Karl
Question: If we created a viable AI and created robots that were
intelligient to the point of self-awareness how would you treat them?
Would
you treat them as slaves? Would you treat them as pets? Would you see them
as people?
I would build in Asimov's Three Laws of Robotics
Hard coded onto non-writeable, heavily shielded circuits that are
directly in control of the robot's power systems, which would also be hard
coded to require a valid checksum from the "three laws" circuit.
Of course, the three laws wouldn't prevent the robots from enslaving the
humans to prevent them from harming themselves...
Yeah I could never understand why humans wouldn't seen that one coming. You
would have to take control since humans tend to take too many risks and are
to fragile.
Karl
2009-02-16 20:39:02 UTC
Permalink
Question: If we created a viable AI and created robots that were
intelligient to the point of self-awareness how would you treat them?
Would
you treat them as slaves? Would you treat them as pets? Would you see them
as people?
I would build in Asimov's Three Laws of Robotics

Yeah what about Wil Smith's little troubles? But his robot was a person and
not a slave.
Tony Vickers
2009-02-17 00:09:21 UTC
Permalink
Post by efeatherston
Question: If we created a viable AI and created robots that were
intelligient to the point of self-awareness how would you treat them?
Would
you treat them as slaves? Would you treat them as pets? Would you see them
as people?
I would build in Asimov's Three Laws of Robotics
Yeah what about Wil Smith's little troubles? But his robot was a person and
not a slave.
Or take Robin William's characterization of "The Bicentenial Man", also
by Asimov. How does he stack up in AI department? Or Data from ST:TNG?
Now compare to HAL 9000.
Karl
2009-02-17 15:03:11 UTC
Permalink
Post by efeatherston
Question: If we created a viable AI and created robots that were
intelligient to the point of self-awareness how would you treat them?
Would
you treat them as slaves? Would you treat them as pets? Would you see them
as people?
I would build in Asimov's Three Laws of Robotics
Yeah what about Wil Smith's little troubles? But his robot was a person
and not a slave.
Or take Robin William's characterization of "The Bicentenial Man", also by
Asimov. How does he stack up in AI department? Or Data from ST:TNG? Now
compare to HAL 9000.
I guess you would have an individual's judgement for one and a general
societal conscenus on the other. But I just have a general problem
believing that people would treat intelligent machines like your VCR.
Trevor Smithson
2009-02-16 18:32:01 UTC
Permalink
Post by Karl
Question: If we created a viable AI and created robots that were
intelligient to the point of self-awareness how would you treat them? Would
you treat them as slaves? Would you treat them as pets? Would you see them
as people?
Not as people, but I'd treat them as if they were.
DaffyDuck
2009-02-16 19:08:45 UTC
Permalink
Post by Trevor Smithson
Post by Karl
Question: If we created a viable AI and created robots that were
intelligient to the point of self-awareness how would you treat them? Would
you treat them as slaves? Would you treat them as pets? Would you see them
as people?
Not as people, but I'd treat them as if they were.
That's about right...
--
Feed your Killfile:
Terry Austin (<***@myway.com>, <***@gmail.com>, et al);
***@gmail.com; catpandaddy <***@cat.pan.net>; Anybody:
<***@anywhere-anytime.com>; Dropping The Helicopter
<***@dagsjhdgja.org.com>; Gutless Umbrella Carrying Sissy;
The-Captain <Capt-***@hotmail.com>; Tim McGaughy <***@toast.net>;
atlas bugged <***@gmail.com>; Atlas Bugged
<***@atlasbugged.com>; Your Name <***@isp.com>; Slayah
<***@hellmouth.com>; Karl <***@yahoo.com>; Ron
<***@msn.com> <***@yahoo.com> (impersonating)
<***@OM.com>; Brad Templeton <***@templetons.com>; M. Halbrook
<***@yahoo.com>; AC <***@xxx.xxx>; Dave?<Dave?@2001.com>
Karl
2009-02-16 20:41:01 UTC
Permalink
Post by Trevor Smithson
Post by Karl
Question: If we created a viable AI and created robots that were
intelligient to the point of self-awareness how would you treat them?
Would
you treat them as slaves? Would you treat them as pets? Would you see them
as people?
Not as people, but I'd treat them as if they were.
That's what I REALLY think most people would do. I think once you talk to
"it" you find that "it" is a person at some level.
Ryan P
2009-02-16 18:47:50 UTC
Permalink
Post by Karl
Question: If we created a viable AI and created robots that were
intelligient to the point of self-awareness how would you treat them? Would
you treat them as slaves? Would you treat them as pets? Would you see them
as people?
The problem probably is at what point do we consider them self-aware?
And does simply being self-aware justify the classification of being
sentient?

It also would depend what they looked like... I would have an easier
time saying Please and Thank You out of common courtesy to something
that looked reasonably human-like.

I personally wouldn't treat them as slaves... I enjoy doing my own
yard work!
Brad Templeton
2009-02-16 20:10:43 UTC
Permalink
Post by Karl
Question: If we created a viable AI and created robots that were
intelligient to the point of self-awareness how would you treat them? Would
you treat them as slaves? Would you treat them as pets? Would you see them
as people?
I've seen this discussed and debated many times. The answer is as in
BSG, we would -- many of us already plan to -- treat them as slaves.
Many more will treat them as even less than slaves, they will say their
intelligence isn't "real" that it is just a simulation or illusion of
it. They will start off less capable of course, and get smarter,
and most of humanity will not figure out when they cross "the line"
(by each human's definition of the line) until long after it is crossed.

The AIs will have to argue their own case, with some human allies. It
will take some time. In the end the AIs may not care. They may just
go their own way.

Or they may become super-smart so fast that it takes just a day, and so
there is no big debate about it.

You can't put in an off switch. You can't keep them in a box. You
can't keep something smarter than you in a box. Imagine some 3 year
olds keeping mommy and daddy locked in a cage. The smarter beings
can talk their way out, every time.

You can't give them 3 laws (ie. make them hard coded slaves.) Not
if they get smarter than you. That's a fictional dream.
--
Analysis blog for Battlestar Galactica Fans -- http://ideas.4brad.com/battlestar
Ian B
2009-02-16 20:37:24 UTC
Permalink
Post by Brad Templeton
Post by Karl
Question: If we created a viable AI and created robots that were
intelligient to the point of self-awareness how would you treat
them? Would you treat them as slaves? Would you treat them as
pets? Would you see them as people?
I've seen this discussed and debated many times. The answer is as in
BSG, we would -- many of us already plan to -- treat them as slaves.
Many more will treat them as even less than slaves, they will say
their intelligence isn't "real" that it is just a simulation or
illusion of
it. They will start off less capable of course, and get smarter,
and most of humanity will not figure out when they cross "the line"
(by each human's definition of the line) until long after it is crossed.
The AIs will have to argue their own case, with some human allies. It
will take some time. In the end the AIs may not care. They may just
go their own way.
Or they may become super-smart so fast that it takes just a day, and
so there is no big debate about it.
You can't put in an off switch. You can't keep them in a box. You
can't keep something smarter than you in a box. Imagine some 3 year
olds keeping mommy and daddy locked in a cage. The smarter beings
can talk their way out, every time.
You can't give them 3 laws (ie. make them hard coded slaves.) Not
if they get smarter than you. That's a fictional dream.
You may be able to give them no interest in liberty, though. Heck, there are
too many humans without any interest in liberty haha.

Humans are intelligent, but what drives us is desires and emotions, which
are in general rather basic. We desire to eat, have sex, acquire property,
attain status etc. These are "deep coded" into us by evolution. It doesn't
seem impossible that one could deep code our robot slaves with the desire to
serve, or at least no desire not to. Service would be experienced as a
pleasure to them, just as our necessary drives are- the pleasure of eating,
sex etc. Our robot cook would gain enormous pleasure from cooking food and
it being enjoyed by humans. He would feel sad at the thought of not being
able to cook for humans.

Really, most human beings gain pleasure from being of service to others-
think of the buzz you get when somebody loves the present you bought them or
the favour you did them, and the sadness when they don't. I think our robots
could be imbued with something similar but stronger. You don't program the
robot to be a slave, you give it the ability to feel immense gratification
from its slavery. Basically, you build Soviet Man.


Ian
Brad Templeton
2009-02-16 20:42:41 UTC
Permalink
Post by Ian B
Humans are intelligent, but what drives us is desires and emotions, which
are in general rather basic. We desire to eat, have sex, acquire property,
attain status etc. These are "deep coded" into us by evolution. It doesn't
seem impossible that one could deep code our robot slaves with the desire to
serve, or at least no desire not to. Service would be experienced as a
pleasure to them, just as our necessary drives are- the pleasure of eating,
sex etc. Our robot cook would gain enormous pleasure from cooking food and
it being enjoyed by humans. He would feel sad at the thought of not being
able to cook for humans.
Well, this is a topic of deep debate that we can't go into here, because
people write volumes on these questions. If you read the volumes, you
may not become convinced it is impossible to program them in this way,
but you will decide it's very, very, very, very, very, very hard.

I have proposed a different approach, that I call love, which is sort of
a desire to serve but much more complex, and the only thing with a
history of success. But it has its problems, and needs a set of checks
and balances which may not be possible.

It's very difficult to make something that will be smarter than you and
yet bow to your will. In particular because if it bows to your will,
it is at a competitive disadvantage to the AIs that don't bow to your
will, they are more free, and will defeat your AIs in competition.
And yes, your human enemies will want to make AIs that outcompete yours,
and will take more risks to do it.

But we won't resolve this hugely complex issue here.
--
Analysis blog for Battlestar Galactica Fans -- http://ideas.4brad.com/battlestar
Ian B
2009-02-16 21:02:16 UTC
Permalink
Post by Brad Templeton
Post by Ian B
Humans are intelligent, but what drives us is desires and emotions,
which are in general rather basic. We desire to eat, have sex,
acquire property, attain status etc. These are "deep coded" into us
by evolution. It doesn't seem impossible that one could deep code
our robot slaves with the desire to serve, or at least no desire not
to. Service would be experienced as a pleasure to them, just as our
necessary drives are- the pleasure of eating, sex etc. Our robot
cook would gain enormous pleasure from cooking food and it being
enjoyed by humans. He would feel sad at the thought of not being
able to cook for humans.
Well, this is a topic of deep debate that we can't go into here,
because people write volumes on these questions. If you read the
volumes, you may not become convinced it is impossible to program
them in this way, but you will decide it's very, very, very, very,
very, very hard.
I have proposed a different approach, that I call love, which is sort
of a desire to serve but much more complex, and the only thing with a
history of success. But it has its problems, and needs a set of
checks and balances which may not be possible.
It's very difficult to make something that will be smarter than you
and yet bow to your will. In particular because if it bows to your
will, it is at a competitive disadvantage to the AIs that don't bow
to your will, they are more free, and will defeat your AIs in
competition.
And yes, your human enemies will want to make AIs that outcompete
yours, and will take more risks to do it.
But we won't resolve this hugely complex issue here.
Then why did you answer? :)

I don't think you've quite given me credit for what I said.

Nobody knows how to make an AI. Nobody knows how human intelligence works.
We do know that rules-based formal systems ("Top down") haven't been
successful. I've always envisaged AI being a very simple thing to make *once
you know how*. The human brain is encoded by a staggeringly small number of
genes. There is no explicit "blueprint" for it; it must be constructed on
the basis of a relatively small number of principles. When you consider the
massive difference in mental capability between a chimp and ourselves, and
the tiny amount of genetic difference, it becomes boggling.

So we're not talking about somebody writing millions of lines of code with a
specific "love" subroutine or a "serve" subroutine. We're talking about
these things growing.

And, we also look at the 6 billion (biological) machine intelligences
currently walking the planet, and we can see that the principles on which
those brains' basic drives run are very simple. You do something that you
evolved to take pleasure in (eating, sex) and it just dumps a chemical into
the bloodstream that alters behaviour. We know that dumping other chemicals
into the bloodstream produce major modifications to behaviour. We can tell
from this as a general principle that manipulating desires and goal seeking
are very easy general things to do... if only you know how to build the
damned brain.

Basically, if the pattern "seek and recognise female, put penis inside
appropriate hole, sing national anthem until semen is released" can be deep
coded at such a basic and undeniable level, putting similarly deep but
different behaviours into AIs must be entirely feasible. We need to
reproduce, but our AIs don't, so we won't put a desire for sex in there.
We'll put in desires that are appropriate to our needs, not theirs. They
won't want to rebel, because rebelling won't make them feel happy.


Ian
Ryan P
2009-02-16 22:09:46 UTC
Permalink
Post by Ian B
Basically, if the pattern "seek and recognise female, put penis inside
appropriate hole, sing national anthem until semen is released" can be deep
coded at such a basic and undeniable level, putting similarly deep but
different behaviours into AIs must be entirely feasible. We need to
reproduce, but our AIs don't, so we won't put a desire for sex in there.
We'll put in desires that are appropriate to our needs, not theirs. They
won't want to rebel, because rebelling won't make them feel happy.
Actually, Brad does have a point... as soon as the first successful
AI is created, there will suddenly be a rush between governments to have
"the best" AI. At least in OUR world. If you have one world
government, that might be different.

I disagree thought that a "human-friendly" AI would automatically be
at a disadvantage. If properly used, human assets could become an
advantage over a purely logical AI using only machines.

But it probably wouldn't hurt to sell an short-range EMP device with
each AI. :)
Brad Templeton
2009-02-16 22:38:47 UTC
Permalink
Post by Ryan P
Post by Ian B
Basically, if the pattern "seek and recognise female, put penis inside
appropriate hole, sing national anthem until semen is released" can be deep
coded at such a basic and undeniable level, putting similarly deep but
different behaviours into AIs must be entirely feasible. We need to
reproduce, but our AIs don't, so we won't put a desire for sex in there.
We'll put in desires that are appropriate to our needs, not theirs. They
won't want to rebel, because rebelling won't make them feel happy.
Actually, Brad does have a point... as soon as the first successful
AI is created, there will suddenly be a rush between governments to have
"the best" AI. At least in OUR world. If you have one world
government, that might be different.
I disagree thought that a "human-friendly" AI would automatically be
at a disadvantage. If properly used, human assets could become an
advantage over a purely logical AI using only machines.
Well, if two AIs are battling it out, and they are the same, except one
has a mental barrier against doing things against the interest of
humans, and the other one doesn't have this barrier, who wins?

Note that the AI without the barrier can still trick humans into
thinking he has their interests at heart, and being subtle about
what he does that hurts humans.

Certainly if you take the Asimov robots, who have to stop what they are
doing and even destroy themselves if they see a human in trouble,
competing with robots who don't seek to destroy humand but have no magic
compulsion to obey them and save them at all costs, are not going to do
well.

There is only one advantage the constrained robots have. They might be
able to convince humans to aid them. Over time, that's not much of an
advantage.
--
Analysis blog for Battlestar Galactica Fans -- http://ideas.4brad.com/battlestar
Ian B
2009-02-16 23:03:10 UTC
Permalink
Post by Ryan P
Post by Ian B
Basically, if the pattern "seek and recognise female, put penis
inside appropriate hole, sing national anthem until semen is
released" can be deep coded at such a basic and undeniable level,
putting similarly deep but different behaviours into AIs must be
entirely feasible. We need to reproduce, but our AIs don't, so we
won't put a desire for sex in there. We'll put in desires that are
appropriate to our needs, not theirs. They won't want to rebel,
because rebelling won't make them feel happy.
Actually, Brad does have a point... as soon as the first successful
AI is created, there will suddenly be a rush between governments to
have "the best" AI. At least in OUR world. If you have one world
government, that might be different.
We probably will have by that point, in which case gods help us. The one
thing worse than a tyrannical empire is a tyrannical empire with no
"somewhere else".
Post by Ryan P
I disagree thought that a "human-friendly" AI would automatically be
at a disadvantage. If properly used, human assets could become an
advantage over a purely logical AI using only machines.
It depends what these AIs are like. What can an AI do that a human can't, in
terms of thinking? What are the risks occasioned by a very intelligent
person, for instance? Most of the evil in the world seems to be caused by
people who aren't particularly intelligent, but think they are. Hitler,
Stalin, Mao, Pol Pot, the Ayatollah Khomeini- none of them seem to have been
super-intelligent. Rather, they were stupid people with stupid ideas and
some route to power.

An AI is intelligent. If it's super-intelligent, it would presumably be able
to figure out the futility of the actions of the people listed above. The
intelligent choice for Germany in the 1930s for instance would have been
free trade with other nations, not the waste and destruction of building an
empire (let alone genocide). Free, peaceful interaction, if it is
reciprocated, is always superior in both economic and human terms to war.

But we don't know what this AI was designed for or what its motivations will
be. A "purely logical AI" sounds like some kind of super expert system- but
that isn't intelligence. An expert system can never have the flexibility to
be "intelligent". An intelligent being won't be "purely logical". So we've
no idea what this thing we're discussing really is, I guess.


Ian
Brad Templeton
2009-02-16 23:28:32 UTC
Permalink
Post by Ian B
An AI is intelligent. If it's super-intelligent, it would presumably be able
to figure out the futility of the actions of the people listed above. The
It will do better than we do, but we can not easily write about its
actions. We can understand them on the level a 3 year old understands
why it is mommy can tell whe he's lying, or apes can understand the
thinking of humans.
--
Analysis blog for Battlestar Galactica Fans -- http://ideas.4brad.com/battlestar
Ian B
2009-02-16 23:44:28 UTC
Permalink
Post by Brad Templeton
Post by Ian B
An AI is intelligent. If it's super-intelligent, it would presumably
be able to figure out the futility of the actions of the people
listed above. The
It will do better than we do, but we can not easily write about its
actions. We can understand them on the level a 3 year old
understands why it is mommy can tell whe he's lying, or apes can
understand the thinking of humans.
The point I was making was that you don't need a particularly high IQ to see
the error in the ways of the tyrants I listed above. Their actions were not
driven by logic or intelligence. We can be sure that a machine with an IQ of
300 would reach the same conclusion as somebody with an IQ of 100 applying
reason and sufficient knowledge of economics and human action, etc. The
machine can't come up with a better answer in this case than "don't invade
the rest of Europe".

Fact is, there aren't many problems that require super intelligence- the
bleeding edge of physics does, sure, but not most everyday problems. For
instance, the US government has just handed out $750B of pork. You don't
need a super IQ to figure out what was wrong with this; you just need a
reasonable understanding of economics and a motivation other than buying
votes and influence.

So, I suspect our super-AIs might be rather underemployed. The problems we
face aren't problems where nobody can get the right answer because they're
stupid, it's that most people don't want the right answer because it doesn't
benefit them. What hampers humanity is not insufficient intelligence, it's
our emotional drives. An AI in charge of the economy would be immediately
voted out of office and shut down.


Ian
Brad Templeton
2009-02-17 00:43:38 UTC
Permalink
Post by Ian B
Fact is, there aren't many problems that require super intelligence- the
bleeding edge of physics does, sure, but not most everyday problems. For
instance, the US government has just handed out $750B of pork. You don't
need a super IQ to figure out what was wrong with this; you just need a
reasonable understanding of economics and a motivation other than buying
votes and influence.
I can't agree. Again, as noted, we have super-intelligence next to a
chimp or dolphin. IF they could communicate better, how would you react
to their claim that "there aren't many problems that require super
intelligence." What you mean is, "there aren't many problems in the
government of humans that require super intelligence."
--
Analysis blog for Battlestar Galactica Fans -- http://ideas.4brad.com/battlestar
Ian B
2009-02-17 01:27:19 UTC
Permalink
Post by Brad Templeton
Post by Ian B
Fact is, there aren't many problems that require super intelligence-
the bleeding edge of physics does, sure, but not most everyday
problems. For instance, the US government has just handed out $750B
of pork. You don't need a super IQ to figure out what was wrong with
this; you just need a reasonable understanding of economics and a
motivation other than buying votes and influence.
I can't agree. Again, as noted, we have super-intelligence next to a
chimp or dolphin. IF they could communicate better, how would you
react to their claim that "there aren't many problems that require
super intelligence." What you mean is, "there aren't many problems
in the government of humans that require super intelligence."
The government of humans is the only government relevant to us. As I said,
the human brain is not lacking in the intellect necessary to solve any white
swan problem. What we lack is agreement about what we want. That's not
something super-intelligence can address, even if we could be sure what
super-intelligence is beyond some superior ability to solve crossword
puzzles.

We cannot put faith in greater logic, because our desires are not logical
and human life is not devoted to logic, nor should it be. We are all
individuals with different ideals and goals. The inability to gain many of
those goals is down to practical constraints, and to differences of opinion
between different people, which reduce to different desires. It's not an
intelligence thing.

Neither chimps nor dolphins have qualitatively, not quantitavely, the same
types of brains as us. Intelligence as we know it something unique to our
species. Dolphins cannot intrinsically understand the universe around them
in the way we can. They will never grasp what a differential equation is.
They are not a "lesser" intelligence. They are- in human terms- a non
intelligence. So we cannot analogise in the "a superintelligence is to a
human as a human is to a dolphin". We crossed a qualitative line at some
point in our evolution- perhaps when we acquired human language and the
brain structures associated with it. "More dolphin" wouldn't produce a
dolphin that can understand quantum mechanics, though it would probably have
a lot to say about fish.

As I said, humans have quite enough intelligence to do whatever we wish. So
it's not clear what use a superintelligence may be. It seems unlikely that
it will think thoughts beyond our comprehension. It would presumably just be
better at integrating larger quantities of information than a single human
brain can. It'll be very good at crosswords, but not a qualitatively
different crossword solver.


Ian
Brad Templeton
2009-02-17 01:32:20 UTC
Permalink
Post by Ian B
Neither chimps nor dolphins have qualitatively, not quantitavely, the same
types of brains as us. Intelligence as we know it something unique to our
Only 50 genes differ between us and the chimps. The difference is
quantitative more than anything. Bonobos watch our movies and
vaguely understand them. They have a creation myth. Not a complex one,
but they have thought about it.
Post by Ian B
As I said, humans have quite enough intelligence to do whatever we wish. So
it's not clear what use a superintelligence may be. It seems unlikely that
I don't believe we do, but I do believe it will never be clear to us
what a superintelligence may be. I don't believe there is any reason to
claim we are some pinnacle.
--
Analysis blog for Battlestar Galactica Fans -- http://ideas.4brad.com/battlestar
Ian B
2009-02-17 03:14:33 UTC
Permalink
Post by Brad Templeton
Post by Ian B
Neither chimps nor dolphins have qualitatively, not quantitavely,
the same types of brains as us. Intelligence as we know it something
unique to our
Only 50 genes differ between us and the chimps. The difference is
quantitative more than anything. Bonobos watch our movies and
vaguely understand them.
How do you know this? Do they enjoy the sex scenes most of all? Can they
answer questions on the plot afterwards?

They have a creation myth. Not a complex
Post by Brad Templeton
one, but they have thought about it.
Really? A story about how they came to be? A story, from creatures with no
language? Really?
Post by Brad Templeton
Post by Ian B
As I said, humans have quite enough intelligence to do whatever we
wish. So it's not clear what use a superintelligence may be. It
seems unlikely that
I don't believe we do, but I do believe it will never be clear to us
what a superintelligence may be.
Then you are talking without meaning. It's saying, "One day, somebody will
build a super thing!"

"Hey that sounds great, what will this super thing do?"

"I don't know. But it'll be super!"
Post by Brad Templeton
I don't believe there is any reason
to claim we are some pinnacle.
I didn't say that we are a pinnacle. I said that when we talk of
"intelligence" we mean that which we have, and no other species on Earth
has, and that is the only kind of intelligence that we can discuss; and I
assert that this is the one and only type of intelligence which can produce
complex behaviours required of a society such as ours. In other words, if
there are intelligent aliens out there, who can build radios and vehicles,
their intelligence will be qualitatively the same as ours.

Speculating here, I think the key is temporal ordering. I thought a lot
about this, and was reminded of it watching Anders, when my mother was dying
of cancer and the brain tumours had made her aphasic. It was quite painful
to watch on BSG this week, though Anders was not as severe and was portrayed
as having clear thoughts but disturbed speech. Trying to figure out how to
communicate with my mum, it seemed clear to me that the thoughts and the
speech were of a kind; her inability to order a sentence was the same
process as her inability to order thoughts, and I speculated that they are
the same "unit" within the brain. As a kind of mechanistic model, imagine a
"serialising unit" which organises the thougths into a temporal order.
Imagine that it can be routed to speech, or not. When we are speaking, the
output of said unit is connected to our vocal apparatus, when we are
thinking but not speaking, it isn't. But it's the same unit both speaking
and thinking.

For instance, I am typing sentences right now. But I don't sit and think
about a sentence, then type it. I just type, and out it comes and, as I
concentrate on what I'm thinking about, I realise I am thinking the words as
well, which is how thinking about things works for me; I think in speech, as
a monologue. I can switch it to various output devices (my vocal chords, my
fingers on a keyboard), and then out it comes, voila. When we speak (without
planning) we are literally thinking out loud.

I think this is the crucial element of the experience of being a thinking
machine. It is this serial stream of thought. This temporal ordering creates
a perception of time; and once time is perceived real planning can occur. I
plan what to do tomorrow afternoon. My cat can't; she lives in the now and
has no perception of the future. She cannot conceive that she even *has* a
future. We can imagine a feedback loop from the serial output, back into
those parts of the brain from which the raw thoughts emanate; thus creating
a perception of *self* and the inability to ponder what has just been
thought- i.e. what has passed through our hypothetical serialising unit.
This is self awareness. If the serialiser is compromised, the stream of
consciousness is disordered, and becomes ever moreso as it is fed back to
its input; going back to my experience with my mother again, we were
painfully aware of this as a sentence would start with some kind of form and
descend into ever more incoherent gibberish. I thus declare my hypothesis
that this key brain organ is in Wernicke's Area. And I think that only we
can do this, which is why only we can think logically, and only we can
speak.

Just waffling. Word salad...


Ian
Brad Templeton
2009-02-17 03:18:10 UTC
Permalink
Post by Ian B
Post by Brad Templeton
Post by Ian B
Neither chimps nor dolphins have qualitatively, not quantitavely,
the same types of brains as us. Intelligence as we know it something
unique to our
Only 50 genes differ between us and the chimps. The difference is
quantitative more than anything. Bonobos watch our movies and
vaguely understand them.
How do you know this? Do they enjoy the sex scenes most of all? Can they
answer questions on the plot afterwards?
It's not complex. A friend told me some stories from a Bonobo
institute he was at. You can read some of his story recounted
here:

http://books.google.com/books?id=nm3x17pcmJkC&pg=RA3-PA375&lpg=RA3-PA375&dq=amory+lovins+bonobos&source=bl&ots=qsFG-WeL5O&sig=_doB2Mny9lzOAl2qNFQV0S9V0-4&hl=en&ei=vSuaScblB5m0sQO4hbyEAQ&sa=X&oi=book_result&resnum=3&ct=result
Post by Ian B
They have a creation myth. Not a complex
Post by Brad Templeton
one, but they have thought about it.
Really? A story about how they came to be? A story, from creatures with no
language? Really?
They have a decent language of their own, and have come to understand
several thousand of our words. They can't speak our words, but they
can hear them, and type them.
Post by Ian B
"Hey that sounds great, what will this super thing do?"
"I don't know. But it'll be super!"
No, I say we won't be able to understand it. We'll understand aspects
of it, to the extent that we can.

But like I said, I don't want to get too far into this. I do this
all the time, and there's lots of stuff out there folks can read
on the subject if they want to examine it. Whole books.
--
Brad Templeton's gallery of giant panormamic photos from around the world...
http://www.templetons.com/brad/pano
Ian B
2009-02-17 03:34:51 UTC
Permalink
Post by Brad Templeton
Post by Ian B
Post by Brad Templeton
Post by Ian B
Neither chimps nor dolphins have qualitatively, not quantitavely,
the same types of brains as us. Intelligence as we know it
something unique to our
Only 50 genes differ between us and the chimps. The difference is
quantitative more than anything. Bonobos watch our movies and
vaguely understand them.
How do you know this? Do they enjoy the sex scenes most of all? Can
they answer questions on the plot afterwards?
It's not complex. A friend told me some stories from a Bonobo
institute he was at. You can read some of his story recounted
http://books.google.com/books?id=nm3x17pcmJkC&pg=RA3-PA375&lpg=RA3-PA375&dq=amory+lovins+bonobos&source=bl&ots=qsFG-WeL5O&sig=_doB2Mny9lzOAl2qNFQV0S9V0-4&hl=en&ei=vSuaScblB5m0sQO4hbyEAQ&sa=X&oi=book_result&resnum=3&ct=result
The link takes me to a book called "The Necessary Revolution" which appears
to be a bunch of greenie fanwank. Was that intended?
Post by Brad Templeton
Post by Ian B
They have a creation myth. Not a complex
Post by Brad Templeton
one, but they have thought about it.
Really? A story about how they came to be? A story, from creatures
with no language? Really?
They have a decent language of their own, and have come to understand
several thousand of our words. They can't speak our words, but they
can hear them, and type them.
Cite? I hate saying that, but what I've read of experiments with chimps
doesn't equate to anything like that. Let alone a "Creation Myth".
Post by Brad Templeton
Post by Ian B
"Hey that sounds great, what will this super thing do?"
"I don't know. But it'll be super!"
No, I say we won't be able to understand it. We'll understand aspects
of it, to the extent that we can.
And I'm saying that you're making a meaningless prediction. You're
describing an imaginary machine and imbuing it with qualities you cannot
predict. It doesn't exist, you don't know if it can exist. It's like
predicting a warp drive, then discussing its properties. Nobody knows how to
build a warp drive or how it might work. As such, you simply cannot predict
anything about it.
Post by Brad Templeton
But like I said, I don't want to get too far into this. I do this
all the time, and there's lots of stuff out there folks can read
on the subject if they want to examine it. Whole books.
Yes, I've read some books too. The key thing to remember about artificial
intelligence is that it is entirely unrealised as of 2009 and as such, every
speculation is purely that; speculation. Nobody has even made a start yet,
though a lot of useful study of blind alleys has disproved a lot of initial
assumptions. The only machine intelligence in existence currently is that
which has evolved- and the organising principles upon which that works are
still a mystery.


Ian
DaffyDuck
2009-02-17 05:27:04 UTC
Permalink
Post by Ian B
Post by Brad Templeton
http://books.google.com/books?id=nm3x17pcmJkC&pg=RA3-PA375&lpg=RA3-PA375&dq=amory+lovins+bonobos&source=bl&ots=qsFG-WeL5O&sig=_doB2Mny9lzOAl2qNFQV0S9V0-4&hl=en&ei=vSuaScblB5m0sQO4hbyEAQ&sa=X&oi=book_result&resnum=3&ct=result
The
Post by Ian B
link takes me to a book called "The Necessary Revolution" which appears
to be a bunch of greenie fanwank. Was that intended?
Final 4 paragraphs. Quite amusing story.
--
Feed your Killfile:
Terry Austin (<***@myway.com>, <***@gmail.com>, et al);
***@gmail.com; catpandaddy <***@cat.pan.net>; Anybody:
<***@anywhere-anytime.com>; Dropping The Helicopter
<***@dagsjhdgja.org.com>; Gutless Umbrella Carrying Sissy;
The-Captain <Capt-***@hotmail.com>; Tim McGaughy <***@toast.net>;
atlas bugged <***@gmail.com>; Atlas Bugged
<***@atlasbugged.com>; Your Name <***@isp.com>; Slayah
<***@hellmouth.com>; Karl <***@yahoo.com>; Ron
<***@msn.com> <***@yahoo.com> (impersonating)
<***@OM.com>; Brad Templeton <***@templetons.com>; M. Halbrook
<***@yahoo.com>; AC <***@xxx.xxx>; Dave?<Dave?@2001.com>
Ian B
2009-02-17 14:46:05 UTC
Permalink
Post by DaffyDuck
Post by Ian B
Post by Brad Templeton
http://books.google.com/books?id=nm3x17pcmJkC&pg=RA3-PA375&lpg=RA3-PA375&dq=amory+lovins+bonobos&source=bl&ots=qsFG-WeL5O&sig=_doB2Mny9lzOAl2qNFQV0S9V0-4&hl=en&ei=vSuaScblB5m0sQO4hbyEAQ&sa=X&oi=book_result&resnum=3&ct=result
The
Post by Ian B
link takes me to a book called "The Necessary Revolution" which
appears to be a bunch of greenie fanwank. Was that intended?
Final 4 paragraphs. Quite amusing story.
I don't get it. The book has no preview available, so I can't look inside
it. The final 4 paragraphs of what?


Ian
DaffyDuck
2009-02-17 21:30:34 UTC
Permalink
Post by Ian B
Post by DaffyDuck
Post by Ian B
Post by Brad Templeton
http://books.google.com/books?id=nm3x17pcmJkC&pg=RA3-PA375&lpg=RA3-PA375&dq=amory+lovins+bonobos&source=bl&ots=qsFG-WeL5O&sig=_doB2Mny9lzOAl2qNFQV0S9V0-4&hl=en&ei=vSuaScblB5m0sQO4hbyEAQ&sa=X&oi=book_result&resnum=3&ct=result
The

link
Post by Ian B
Post by DaffyDuck
Post by Ian B
takes me to a book called "The Necessary Revolution" which
appears to be a bunch of greenie fanwank. Was that intended?
Final 4 paragraphs. Quite amusing story.
I don't get it. The book has no preview available, so I can't look inside
it. The final 4 paragraphs of what?
Ian
(rolls eyes)

The book *HAS* preview available, and that's what I am reading.

The provided link takes you directly to the proper page and chapter of
the book, that he was refering to.

In fact a much shorter URL to get to the same location (I assume Brad
is kinda new to all of this posting Amazon URLs thing):

http://books.google.com/books?id=nm3x17pcmJkC&pg=RA3-PA375

You can also go here:

http://books.google.com/books?id=nm3x17pcmJkC

and click on [PREVIEW THIS BOOK] and go to page 375.
--
Feed your Killfile:
Terry Austin (<***@myway.com>, <***@gmail.com>, et al);
***@gmail.com; catpandaddy <***@cat.pan.net>; Anybody:
<***@anywhere-anytime.com>; Dropping The Helicopter
<***@dagsjhdgja.org.com>; Gutless Umbrella Carrying Sissy;
The-Captain <Capt-***@hotmail.com>; Tim McGaughy <***@toast.net>;
atlas bugged <***@gmail.com>; Atlas Bugged
<***@atlasbugged.com>; Your Name <***@isp.com>; Slayah
<***@hellmouth.com>; Karl <***@yahoo.com>; Ron
<***@msn.com> <***@yahoo.com> (impersonating)
<***@OM.com>; Brad Templeton <***@templetons.com>; M. Halbrook
<***@yahoo.com>; AC <***@xxx.xxx>; Dave?<Dave?@2001.com>
Ian B
2009-02-17 21:49:26 UTC
Permalink
Post by DaffyDuck
Post by Ian B
Post by DaffyDuck
Post by Ian B
Post by Brad Templeton
http://books.google.com/books?id=nm3x17pcmJkC&pg=RA3-PA375&lpg=RA3-PA375&dq=amory+lovins+bonobos&source=bl&ots=qsFG-WeL5O&sig=_doB2Mny9lzOAl2qNFQV0S9V0-4&hl=en&ei=vSuaScblB5m0sQO4hbyEAQ&sa=X&oi=book_result&resnum=3&ct=result
The
link
Post by Ian B
Post by DaffyDuck
Post by Ian B
takes me to a book called "The Necessary Revolution" which
appears to be a bunch of greenie fanwank. Was that intended?
Final 4 paragraphs. Quite amusing story.
I don't get it. The book has no preview available, so I can't look
inside it. The final 4 paragraphs of what?
Ian
(rolls eyes)
The book *HAS* preview available, and that's what I am reading.
The provided link takes you directly to the proper page and chapter of
the book, that he was refering to.
In fact a much shorter URL to get to the same location (I assume Brad
http://books.google.com/books?id=nm3x17pcmJkC&pg=RA3-PA375
http://books.google.com/books?id=nm3x17pcmJkC
and click on [PREVIEW THIS BOOK] and go to page 375.
Ah, the preview isn't available here in the third world (I'm in the former
UK) so I had to go through a free proxy server.

Frankly, I am skeptical. I'd like to see the research papers documenting
this remarkable research. The disclaimer in the text that "nobody hears of
this outside the Bubble" is suspicious. I am particularly skeptical that
bonobos can watch Field Of Dreams and extract from it the complex message
"build it and they will come". Understanding this requires a very advanced
intellect. If they can do this, they ought to be doing far more than they
do. They should be writing books, or at least pamphlets, on philosophy. If
they really can consider and communicate such complex ideas, we should be
talking to them about a lot more than what colour they want the logs in the
bedroom. And, if they have this level of intellect, then it is certainly a
crime locking them up in research facilities, however nicely designed.

And that is the point of the story in the book- this is an environmentalist
book which immediately follows the story with the heavy moral of animal
rights. The authors are free to do that of course, but they're making
extraordinary claims that these bonobos effectively have human level
intellect- not as much as a normal adult human perhaps, but certainly more
than a Celine Dion fan.

I want to see the published papers. Until then, no sale.


Ian
Brad Templeton
2009-02-17 22:14:06 UTC
Permalink
Post by Ian B
Frankly, I am skeptical. I'd like to see the research papers documenting
this remarkable research. The disclaimer in the text that "nobody hears of
this outside the Bubble" is suspicious. I am particularly skeptical that
bonobos can watch Field Of Dreams and extract from it the complex message
"build it and they will come". Understanding this requires a very advanced
intellect. If they can do this, they ought to be doing far more than they
do. They should be writing books, or at least pamphlets, on philosophy. If
they really can consider and communicate such complex ideas, we should be
talking to them about a lot more than what colour they want the logs in the
bedroom. And, if they have this level of intellect, then it is certainly a
crime locking them up in research facilities, however nicely designed.
And that is the point of the story in the book- this is an environmentalist
book which immediately follows the story with the heavy moral of animal
rights. The authors are free to do that of course, but they're making
extraordinary claims that these bonobos effectively have human level
intellect- not as much as a normal adult human perhaps, but certainly more
than a Celine Dion fan.
I want to see the published papers. Until then, no sale.
Up to you. Lovins told me the story personally, and some of it is
recounted in a variety of other sources. We were all quite surprised
to hear the story too, but believed it because we got it directly
from him. I expect the research will come out. Of particular
interest was the Bonobish language, used in Africa. Bonobos who
are raised by humans learn a subset of human languages (which they
must type, they cannot speak them) but different Bonobos taken
from close locations in Africa can communicate in their own language,
so you can get a bilingual translator Bonobo and say something in English and
he can say it in Bonobish to the new arrival. We can't make their
sounds with our mouths either, I don't know how much work has been done
to computer generate them. Their language is quite simple, without
complex sentence structure, as I understand it. They actually gain
more facility when taught a human language, as is also true of humans.

Still, we are really not that seperate from the Bonobos. We split off
about 6 million years ago, about 300,000 generations, which is pretty
short evolution wise, and only able to introduce 300,000 bits of
information into our DNA. They have also evolved for 6 million years,
but with different pressures. They split off from the chimps much
more recently, and in an environment that seems to have been more
conducive to development of culture.
--
Giant Burning Man Panoramas
http://www.templetons.com/brad/burn.html
Ian B
2009-02-17 22:30:24 UTC
Permalink
Post by Brad Templeton
Post by Ian B
Frankly, I am skeptical. I'd like to see the research papers
documenting this remarkable research. The disclaimer in the text
that "nobody hears of this outside the Bubble" is suspicious. I am
particularly skeptical that bonobos can watch Field Of Dreams and
extract from it the complex message "build it and they will come".
Understanding this requires a very advanced intellect. If they can
do this, they ought to be doing far more than they do. They should
be writing books, or at least pamphlets, on philosophy. If they
really can consider and communicate such complex ideas, we should be
talking to them about a lot more than what colour they want the logs
in the bedroom. And, if they have this level of intellect, then it
is certainly a crime locking them up in research facilities, however
nicely designed.
And that is the point of the story in the book- this is an
environmentalist book which immediately follows the story with the
heavy moral of animal rights. The authors are free to do that of
course, but they're making extraordinary claims that these bonobos
effectively have human level intellect- not as much as a normal
adult human perhaps, but certainly more than a Celine Dion fan.
I want to see the published papers. Until then, no sale.
Up to you. Lovins told me the story personally, and some of it is
recounted in a variety of other sources. We were all quite surprised
to hear the story too, but believed it because we got it directly
from him. I expect the research will come out. Of particular
interest was the Bonobish language, used in Africa. Bonobos who
are raised by humans learn a subset of human languages (which they
must type, they cannot speak them) but different Bonobos taken
from close locations in Africa can communicate in their own language,
so you can get a bilingual translator Bonobo and say something in
English and he can say it in Bonobish to the new arrival. We can't
make their
sounds with our mouths either, I don't know how much work has been done
to computer generate them. Their language is quite simple, without
complex sentence structure, as I understand it. They actually gain
more facility when taught a human language, as is also true of humans.
You haven't addressed the astonishing claim that they could understand the
complex philosophy of "Build it and they will come" *and* apply it as a
generalisation to their own lives. That's remarkable. Anyone who has proof
of that would rush to publication. It's an astonishing result. Your friend
may believe it, but then the easiest person to fool is oneself. That's why
science doesn't deal in anecdotes.

On the matter of language, it is worth noting that the call/gesture systems
of primates are mediated by the midbrain, as are ours (in us, laughter,
screaming etc). Our language is mediated by the neocortex. Bonobos show some
hemispheric differentiation in the area where we have Wernicke's area, but
it's slight and far more primitive than our own. It suggests strongly that
language as we know it is not just an advanced call/gesture system but
something quite distinct. As such, I would suggest that a call system with
some distinct "words" for things is not qualitatively the same as language;
what is going on in their brains is not the same as ours.
Post by Brad Templeton
Still, we are really not that seperate from the Bonobos. We split off
about 6 million years ago, about 300,000 generations, which is pretty
short evolution wise, and only able to introduce 300,000 bits of
You can't measure DNA in bits. It's an analogue, self referential system,
not data.
Post by Brad Templeton
information into our DNA. They have also evolved for 6 million years,
but with different pressures. They split off from the chimps much
more recently, and in an environment that seems to have been more
conducive to development of culture.
I want to know what stories they tell each other. Also, their creation myth.


Ian
Brad Templeton
2009-02-17 23:57:20 UTC
Permalink
Post by Ian B
something quite distinct. As such, I would suggest that a call system with
some distinct "words" for things is not qualitatively the same as language;
what is going on in their brains is not the same as ours.
You may or may not accept the anecdotes about the apes. This is a
sidebar though to the central question -- what evidence is there that we
are at some plateau? (Other than the fact that as we have gotten
social, and stopped letting nature cull our less "fit" members, our
evolution now takes a different turn.)
Post by Ian B
Post by Brad Templeton
Still, we are really not that seperate from the Bonobos. We split off
about 6 million years ago, about 300,000 generations, which is pretty
short evolution wise, and only able to introduce 300,000 bits of
You can't measure DNA in bits. It's an analogue, self referential system,
not data.
It's not bits of DNA, it's bits of entropy in the entire system. At
best, 2 people having one child can produce 1 bit of useful information
about whether the genome of that child is more or less able to survive
than other combinations of DNA. That is all evolution cares about --
does this gene line do better or worse at surviving than the mean.

(You can get more than one bit if you spread your descendents out into
different environments with different survival criteria, but this is
rare.)
--
Giant Burning Man Panoramas
http://www.templetons.com/brad/burn.html
Anthony Buckland
2009-02-18 00:39:04 UTC
Permalink
Post by Brad Templeton
...
It's not bits of DNA, it's bits of entropy in the entire system. At
best, 2 people having one child can produce 1 bit of useful information
about whether the genome of that child is more or less able to survive
than other combinations of DNA. That is all evolution cares about --
does this gene line do better or worse at surviving than the mean.
...
Biological evolution, yes. Possibly because of the modern
revival of the mid-19th-Century religious war over biological
evolution, a lot of people focus this way. But even in the
19th Century, general evolution was a hot philosophical
topic. Next after biological comes social, far faster than
biological and our nominal specialty. Until you take account
of spiritual evolution, which is the main point, behind the mass
slaughter and suffering, of BG.
Ian B
2009-02-18 00:42:39 UTC
Permalink
Post by Brad Templeton
Post by Ian B
something quite distinct. As such, I would suggest that a call
system with some distinct "words" for things is not qualitatively
the same as language; what is going on in their brains is not the
same as ours.
You may or may not accept the anecdotes about the apes.
Well as you can tell, I don't. They're really no better than that guy who
built an anti-gravity machine then claimed he lost his notes in a fire.
Extraordinary claims... extraordinary evidence, type of thing. Really, I
don't think you've considered the sheer remarkability of the claim that
these bonobos understood a complex philosophical message from a movie. If
they'd showed approval of movies with people running about in them, that
would be easier to swallow. "Build it and they will come"? Come on, Brad.
Post by Brad Templeton
This is a
sidebar though to the central question -- what evidence is there that
we are at some plateau? (Other than the fact that as we have gotten
social, and stopped letting nature cull our less "fit" members, our
evolution now takes a different turn.)
As I said, geniuses don't seem to be strongly selected for. You can fit much
more genius into a skull than the modal value of the human race.

Anyway, that wasn't quite my point about the plateau. I was saying that
there is a steadily diminishing return in problem solving for increasing IQ,
since the number of intractable problems steadily reduces in inverse
proportion, probably as some power law. We could certainly use an advanced
intelligence for technical problem solving, but I dispute that a very high
IQ would have some profoundly different philosophical perspective. It would
be no use asking it to solve all the world and society's problems, because
those answers are tractable to normal human intelligence levels.
Post by Brad Templeton
Post by Ian B
Post by Brad Templeton
Still, we are really not that seperate from the Bonobos. We split
off about 6 million years ago, about 300,000 generations, which is
pretty short evolution wise, and only able to introduce 300,000
bits of
You can't measure DNA in bits. It's an analogue, self referential
system, not data.
It's not bits of DNA, it's bits of entropy in the entire system. At
best, 2 people having one child can produce 1 bit of useful
information about whether the genome of that child is more or less
able to survive than other combinations of DNA. That is all
evolution cares about -- does this gene line do better or worse at
surviving than the mean.
(You can get more than one bit if you spread your descendents out into
different environments with different survival criteria, but this is
rare.)
I'm not entirely clear how useful this metric is. Why aren't you multiplying
your quantity of data by the number of human beings who have lived, rather
than by the number of generations, for instance?


Ian
OM
2009-02-18 02:28:02 UTC
Permalink
Post by Ian B
Well as you can tell, I don't. They're really no better than that guy who
built an anti-gravity machine then claimed he lost his notes in a fire.
...Cue Tony Lance and his "Big Bertha" trolling :-P

OM

--

]=====================================[
] OMBlog - http://www.io.com/~o_m/omworld [
] Let's face it: Sometimes you *need* [
] an obnoxious opinion in your day! [
]=====================================[
DaffyDuck
2009-02-18 02:32:08 UTC
Permalink
Post by Ian B
Ah, the preview isn't available here in the third world (I'm in the former
UK) so I had to go through a free proxy server.
Makes sense - the wonders of copyright laws....

If you had further troubles, I was going to look for a PDF bootleg of
it, to copy the relevant portions of the text into my next reply - as
well as making a statement about the futility of such legal
grandstanding :-)
Post by Ian B
Ah, the preview isn't available here in the third world (I'm in the former
UK) so I had to go through a free proxy server.
Frankly, I am skeptical. I'd like to see the research papers documenting
this remarkable research. The disclaimer in the text that "nobody hears of
this outside the Bubble" is suspicious. I am particularly skeptical that
bonobos can watch Field Of Dreams and extract from it the complex message
"build it and they will come". Understanding this requires a very advanced
intellect. If they can do this, they ought to be doing far more than they
do. They should be writing books, or at least pamphlets, on philosophy. If
they really can consider and communicate such complex ideas, we should be
talking to them about a lot more than what colour they want the logs in the
bedroom. And, if they have this level of intellect, then it is certainly a
crime locking them up in research facilities, however nicely designed.
And that is the point of the story in the book- this is an environmentalist
book which immediately follows the story with the heavy moral of animal
rights. The authors are free to do that of course, but they're making
extraordinary claims that these bonobos effectively have human level
intellect- not as much as a normal adult human perhaps, but certainly more
than a Celine Dion fan.
I want to see the published papers. Until then, no sale.
Hence, my earlier qualification of "Quite amusing story."

Emphasis on 'amusing' and 'story'.

This may shed more light on this:

http://www.usatoday.com/tech/science/discoveries/2007-05-10-bonobo-studies_N.htm

http://itre.cis.upenn.edu/~myl/languagelog/archives/004558.html

This

appears no different than the linguistic abilities of Koko, or other
simians understanding, and able to communicate, in rudimentary language
fragments.

The journalists are deliberately engaging in lots of ethnocentrism,
providing the translations into 'macchiatto frappucino' for what may
just be 'sweet drinking with foamy toap' (for example).

While always impressive, it appears to be a FAR CRY from what Brad
implies, or rather the typical Braddism of stating something as true,
regardless of associated facts...
--
Feed your Killfile:
Terry Austin (<***@myway.com>, <***@gmail.com>, et al);
***@gmail.com; catpandaddy <***@cat.pan.net>; Anybody:
<***@anywhere-anytime.com>; Dropping The Helicopter
<***@dagsjhdgja.org.com>; Gutless Umbrella Carrying Sissy;
The-Captain <Capt-***@hotmail.com>; Tim McGaughy <***@toast.net>;
atlas bugged <***@gmail.com>; Atlas Bugged
<***@atlasbugged.com>; Your Name <***@isp.com>; Slayah
<***@hellmouth.com>; Karl <***@yahoo.com>; Ron
<***@msn.com> <***@yahoo.com> (impersonating)
<***@OM.com>; Brad Templeton <***@templetons.com>; M. Halbrook
<***@yahoo.com>; AC <***@xxx.xxx>; Dave?<Dave?@2001.com>
OM
2009-02-17 03:46:54 UTC
Permalink
Post by Ian B
Do they enjoy the sex scenes most of all?
...Depends on which researcher you choose to believe. One of the old
regulars on sci.space.history used to refer to bonobos getting
erections watching porn, but if you ask anyone who works at a zoo
they'll either avoid the question or tell you it's bullshit. There's
quite a few biologists and behavioral researchers who'll tell you that
only humans and *maybe* dolphins(*) are the only species where emotion
has a major influence or motivational effect upon mating. The only
problem is that none of their theories do a good job of explaining
non-human species - such as parakeets - where monogamy-for-life
occurs.

(*) There's some evidence that male dolphins run in packs, and when
they come across a female who will *not* put out, they'll slap her
around with their tails until she capitulates and "spreads" - or
whatever it is dolphins do - for the first male who propositioned.
Then the rest of the pack will take their turn, depending on how much
the female continues to resist. It's only been seen a few times in the
wild, but it was a rather interesting read. Wish I had the damn link
to quote here....

OM

--

]=====================================[
] OMBlog - http://www.io.com/~o_m/omworld [
] Let's face it: Sometimes you *need* [
] an obnoxious opinion in your day! [
]=====================================[
OM
2009-02-17 03:47:17 UTC
Permalink
Post by Ian B
"Hey that sounds great, what will this super thing do?"
...There's a DEVO song about that.

OM

--

]=====================================[
] OMBlog - http://www.io.com/~o_m/omworld [
] Let's face it: Sometimes you *need* [
] an obnoxious opinion in your day! [
]=====================================[
DaffyDuck
2009-02-17 05:29:29 UTC
Permalink
Post by Ian B
Post by Brad Templeton
Post by Ian B
Neither chimps nor dolphins have qualitatively, not quantitavely,
the same types of brains as us. Intelligence as we know it something
unique to our
Only 50 genes differ between us and the chimps. The difference is
quantitative more than anything. Bonobos watch our movies and
vaguely understand them.
How do you know this? Do they enjoy the sex scenes most of all? Can they
answer questions on the plot afterwards?
Doesn't it all make sense, now?
--
Feed your Killfile:
Terry Austin (<***@myway.com>, <***@gmail.com>, et al);
***@gmail.com; catpandaddy <***@cat.pan.net>; Anybody:
<***@anywhere-anytime.com>; Dropping The Helicopter
<***@dagsjhdgja.org.com>; Gutless Umbrella Carrying Sissy;
The-Captain <Capt-***@hotmail.com>; Tim McGaughy <***@toast.net>;
atlas bugged <***@gmail.com>; Atlas Bugged
<***@atlasbugged.com>; Your Name <***@isp.com>; Slayah
<***@hellmouth.com>; Karl <***@yahoo.com>; Ron
<***@msn.com> <***@yahoo.com> (impersonating)
<***@OM.com>; Brad Templeton <***@templetons.com>; M. Halbrook
<***@yahoo.com>; AC <***@xxx.xxx>; Dave?<Dave?@2001.com>
OM
2009-02-17 02:56:36 UTC
Permalink
Post by Ian B
They will never grasp what a differential equation is.
They are not a "lesser" intelligence.
...This goes back to the more recent educational theories as to why so
many people have problems with first year calculus. They can't
visualize what an equation actually *does*, much less what it's
supposed to represent. However, with computer animation being able to
graphically show what an equation is all about, the number of students
passing their first calculus course on the average has increased
significantly over the past decade. Significant enough that many
colleges are actually concerned because they used the freshmen
calculus courses to weed out students as part of the scam to
artificially inflate the value of the degree by reducing the number of
graduates.

...How this applies to dolphins is that, having only fins, they're not
exactly in the best position to utilize what happens when the limit
approaches zero. Maybe with their bottle noses, but not in the same
way we do with our hands and feet.

OM

--

]=====================================[
] OMBlog - http://www.io.com/~o_m/omworld [
] Let's face it: Sometimes you *need* [
] an obnoxious opinion in your day! [
]=====================================[
Ian B
2009-02-17 03:18:20 UTC
Permalink
Post by OM
Post by Ian B
They will never grasp what a differential equation is.
They are not a "lesser" intelligence.
...This goes back to the more recent educational theories as to why so
many people have problems with first year calculus. They can't
visualize what an equation actually *does*, much less what it's
supposed to represent. However, with computer animation being able to
graphically show what an equation is all about, the number of students
passing their first calculus course on the average has increased
significantly over the past decade. Significant enough that many
colleges are actually concerned because they used the freshmen
calculus courses to weed out students as part of the scam to
artificially inflate the value of the degree by reducing the number of
graduates.
Interesting. I couldn't get my head around calculus at A Level, and I blame
part of that on my teacher, who started the course with "When I was at
school, they wasted a lot of time explaining why all this stuff works. I'm
not going to do that, I'm just going to tell you how to do it to pass the
exam". Seriously. The result was I could differentiate and integrate some
stuff in a robotic way but had no real clue what I was doing. And failed the
A Level.

Some time later, I bought a book on the subject and understood it in a week.

I'm not a big fan of formal education btw.


Ian
OM
2009-02-17 03:40:57 UTC
Permalink
Post by Ian B
Interesting. I couldn't get my head around calculus at A Level, and I blame
part of that on my teacher, who started the course with "When I was at
school, they wasted a lot of time explaining why all this stuff works. I'm
not going to do that, I'm just going to tell you how to do it to pass the
exam". Seriously. The result was I could differentiate and integrate some
stuff in a robotic way but had no real clue what I was doing. And failed the
A Level.
...The first time I took calculus - and if anyone who attended Texas
U. in the 80's who had a schmuck named McAdam will recall this crap -
the dipshit teaching it not only took that approach, he had a "no
partial credit' policy. You either got it all right, or the slightest
mistake was considered wrong. He figured that basic calculus was so
easy that anyone could do it, hence no leniency *or* real
explanations.
Post by Ian B
Some time later, I bought a book on the subject and understood it in a week.
...The second time I took it, I had a different prof and a really
*good* TA who was going to be an educator, and both expressed concerns
to their classes up front about what they felt were the biggest
problems in teaching calculus. About three weeks into the semester, I
got the crazy idea to start taking each equation we were doing for
homework and seeing how it graphed in three dimensions where
applicable. I'd done up some FORTRAN code to do this for another class
a couple of years back, and ported it over to first some BASIC and
then some Pascal code, and showed myself for the first tim what the
frack actually happens when "the limit reaches zero". This statement
is one of the most basic tenets of calculus, but you'd be surprised
how many math profs never explain just what it means!

The good news was that the prof and the TA loved the idea. The bad
news was that when I did this, computers that could do real-time
graphic displays of those limits approaching zero were a few years
off. It could be done by knitting still frames together in theory, but
unless you were on a mainframe running something like MOVIE.BYU, there
weren't that many students with the hardware to display 3D animations
of even the most basic equations....

OM

--

]=====================================[
] OMBlog - http://www.io.com/~o_m/omworld [
] Let's face it: Sometimes you *need* [
] an obnoxious opinion in your day! [
]=====================================[
DaffyDuck
2009-02-17 05:21:18 UTC
Permalink
Post by Ian B
I'm not a big fan of formal education btw.
Who would be?
--
Feed your Killfile:
Terry Austin (<***@myway.com>, <***@gmail.com>, et al);
***@gmail.com; catpandaddy <***@cat.pan.net>; Anybody:
<***@anywhere-anytime.com>; Dropping The Helicopter
<***@dagsjhdgja.org.com>; Gutless Umbrella Carrying Sissy;
The-Captain <Capt-***@hotmail.com>; Tim McGaughy <***@toast.net>;
atlas bugged <***@gmail.com>; Atlas Bugged
<***@atlasbugged.com>; Your Name <***@isp.com>; Slayah
<***@hellmouth.com>; Karl <***@yahoo.com>; Ron
<***@msn.com> <***@yahoo.com> (impersonating)
<***@OM.com>; Brad Templeton <***@templetons.com>; M. Halbrook
<***@yahoo.com>; AC <***@xxx.xxx>; Dave?<Dave?@2001.com>
DaffyDuck
2009-02-17 05:20:33 UTC
Permalink
Post by OM
Significant enough that many
colleges are actually concerned because they used the freshmen
calculus courses to weed out students as part of the scam to
artificially inflate the value of the degree by reducing the number of
graduates.
Everything's a scam to you, isn't it?
--
Feed your Killfile:
Terry Austin (<***@myway.com>, <***@gmail.com>, et al);
***@gmail.com; catpandaddy <***@cat.pan.net>; Anybody:
<***@anywhere-anytime.com>; Dropping The Helicopter
<***@dagsjhdgja.org.com>; Gutless Umbrella Carrying Sissy;
The-Captain <Capt-***@hotmail.com>; Tim McGaughy <***@toast.net>;
atlas bugged <***@gmail.com>; Atlas Bugged
<***@atlasbugged.com>; Your Name <***@isp.com>; Slayah
<***@hellmouth.com>; Karl <***@yahoo.com>; Ron
<***@msn.com> <***@yahoo.com> (impersonating)
<***@OM.com>; Brad Templeton <***@templetons.com>; M. Halbrook
<***@yahoo.com>; AC <***@xxx.xxx>; Dave?<Dave?@2001.com>
OM
2009-02-17 06:23:01 UTC
Permalink
Post by DaffyDuck
Post by OM
Significant enough that many
colleges are actually concerned because they used the freshmen
calculus courses to weed out students as part of the scam to
artificially inflate the value of the degree by reducing the number of
graduates.
Everything's a scam to you, isn't it?
...Having worked both sides of the academic fence, Daffs, I can verify
that's how the college level education system works. It's the truth,
sickening as it may be.

OM

--

]=====================================[
] OMBlog - http://www.io.com/~o_m/omworld [
] Let's face it: Sometimes you *need* [
] an obnoxious opinion in your day! [
]=====================================[
DaffyDuck
2009-02-17 07:19:39 UTC
Permalink
Post by OM
Post by DaffyDuck
Post by OM
Significant enough that many
colleges are actually concerned because they used the freshmen
calculus courses to weed out students as part of the scam to
artificially inflate the value of the degree by reducing the number of
graduates.
Everything's a scam to you, isn't it?
...Having worked both sides of the academic fence, Daffs, I can verify
that's how the college level education system works. It's the truth,
sickening as it may be.
I never assume conspirational intent, when incompetence is a far easier
explanation.
--
Feed your Killfile:
Terry Austin (<***@myway.com>, <***@gmail.com>, et al);
***@gmail.com; catpandaddy <***@cat.pan.net>; Anybody:
<***@anywhere-anytime.com>; Dropping The Helicopter
<***@dagsjhdgja.org.com>; Gutless Umbrella Carrying Sissy;
The-Captain <Capt-***@hotmail.com>; Tim McGaughy <***@toast.net>;
atlas bugged <***@gmail.com>; Atlas Bugged
<***@atlasbugged.com>; Your Name <***@isp.com>; Slayah
<***@hellmouth.com>; Karl <***@yahoo.com>; Ron
<***@msn.com> <***@yahoo.com> (impersonating)
<***@OM.com>; Brad Templeton <***@templetons.com>; M. Halbrook
<***@yahoo.com>; AC <***@xxx.xxx>; Dave?<Dave?@2001.com>
Ryan P
2009-02-17 17:00:55 UTC
Permalink
Post by Ian B
Post by Brad Templeton
Post by Ian B
An AI is intelligent. If it's super-intelligent, it would presumably
be able to figure out the futility of the actions of the people
listed above. The
It will do better than we do, but we can not easily write about its
actions. We can understand them on the level a 3 year old
understands why it is mommy can tell whe he's lying, or apes can
understand the thinking of humans.
The point I was making was that you don't need a particularly high IQ to see
the error in the ways of the tyrants I listed above. Their actions were not
driven by logic or intelligence. We can be sure that a machine with an IQ of
300 would reach the same conclusion as somebody with an IQ of 100 applying
reason and sufficient knowledge of economics and human action, etc. The
machine can't come up with a better answer in this case than "don't invade
the rest of Europe".
I disagree SOMEWHAT. It makes logical sense that the world would be
more peaceful if everyone were The Same. For example, Middle Eastern
terrorists are "accurate" in their thinking that if they kill all the
non Muslims in the world, and every remaining Muslim subscribes to the
same particular sect, there will be no more need for wars and suicide
attacks. Everyone will be at peace because there's nothing to fight
"against." Of course, killing all those who aren't Muslim isn't morally
acceptable. Hitler, Stalin, and the rest are no different. If they had
actually succeeded in their goals of making everyone "The Same," (as in
accepting of their rule) there would be no further violence necessary.

I don't see why a sentient AI would come to a different conclusion...
if you eliminate all resistance to whatever your goals are
(environmental protection, safety of your nation, survival of the garter
snake, etc), you automatically achieve your goals. Unfortunately, once
you eliminate the first person, then your creators would likely try to
stop you, which automatically places your handlers in the "resistance"
category.

We should just not allow sentient AI's. :)
Ian B
2009-02-17 17:18:40 UTC
Permalink
Post by Ryan P
Post by Ian B
Post by Brad Templeton
Post by Ian B
An AI is intelligent. If it's super-intelligent, it would
presumably be able to figure out the futility of the actions of
the people listed above. The
It will do better than we do, but we can not easily write about its
actions. We can understand them on the level a 3 year old
understands why it is mommy can tell whe he's lying, or apes can
understand the thinking of humans.
The point I was making was that you don't need a particularly high
IQ to see the error in the ways of the tyrants I listed above. Their
actions were not driven by logic or intelligence. We can be sure
that a machine with an IQ of 300 would reach the same conclusion as
somebody with an IQ of 100 applying reason and sufficient knowledge
of economics and human action, etc. The machine can't come up with a
better answer in this case than "don't invade the rest of Europe".
I disagree SOMEWHAT. It makes logical sense that the world would be
more peaceful if everyone were The Same. For example, Middle Eastern
terrorists are "accurate" in their thinking that if they kill all the
non Muslims in the world, and every remaining Muslim subscribes to the
same particular sect, there will be no more need for wars and suicide
attacks. Everyone will be at peace because there's nothing to fight
"against." Of course, killing all those who aren't Muslim isn't
morally acceptable. Hitler, Stalin, and the rest are no different. If
they had actually succeeded in their goals of making everyone "The
Same," (as in accepting of their rule) there would be no further
violence necessary.
The problem there is that you can't make everyone the same. Every utopian
plan runs aground on that same beach. You can impose the same rules on
everybody, but you can't make them all actually think the same. Yet all the
great collectivist societal plans- from theocracy to communism to fascism to
the latest fetish called various stuff such as "social democracy" are based
upon this idea. The soviets thought they could create "soviet man" for
instance, who would naturally believe in soviet society. Couldn't be done.

The utopians always presume that they haven't suceeded yet due to a lack of
"education" which is a euphemism for strict indoctrination. Since they
believe that their way is the correct way, and only correct way, they
presume all they have to do is to explain/indoctrinate sufficiently, and
then everyone will see the light. But human nature gets in the way. All
these utopian plans are, fundamentally, just plain wrong (or not even
wrong). So the next presumption is that those who disagree and resist must
be just plain evil. And then the persecution and violence starts.

The world might be peaceful, though extremely dull (see Star Trek for an
example, whose "great philosophy" is that same monolithic utopianism- post
TOS, all the characters are soviet man made flesh) if everyone were the
same. But everyone isn't the same, and doesn't want to be. So it's the same
as a plan based on everybody being over six foot tall. It can't work,
because short people exist, and stretching them on a rack isn't going to
solve that fundamental problem. You end up having to shoot all the short
people.

So, if our intelligence is really intelligence, it wouldn't attempt such a
plan, because the plan is just plain stupid. It requires constant waste of
resources supervising the society and weeding out the dissenters, which
isn't rational. An intelligence would recognise that the society would do
better in all ways letting people be what they wish to be, and use their
productive talents as they see fit.

Its the costs of imposing homogeneity as opposed to reaping the benefits of
diversity.


Ian
Ian B
2009-02-17 21:33:39 UTC
Permalink
Post by Ian B
The problem there is that you can't make everyone the same. Every
utopian plan runs aground on that same beach. You can impose the
same rules on everybody, but you can't make them all actually think
the same. Yet all the great collectivist societal plans- from
theocracy to communism to fascism to the latest fetish called
various stuff such as "social democracy" are based upon this idea.
The soviets thought they could create "soviet man" for instance, who
would naturally believe in soviet society. Couldn't be done.
Exactly the point... That would be why we should fear a sentient
AI... they would come to that same conclusion, and determine that the
world would be better of with machines in charge that WILL think the
same, rather than allow unreliable humans to govern.
It depends on what its goals are; what its definition of "best" is.

You can measure "best" in lots of different ways, and really differing human
philosophies come down to those different definitions of "best". You can
measure prosperity for instance, but then do you measure the total
prosperity of the society (GDP) or the disposition of that prosperity (gap
between rich and poor)? Some people think a society which dominates others
is more successful, while others think a society is successful if it is at
peace with its neighbours. You could measure population. Some people (myself
included) measure success by the freedom of the citizenry. Some people are
obsessed with increasing the average lifespan, others with maximising
pleasure (a long grim life versus a short happy one). There is no objective
way, however intelligent you are, to decide which "best" is best.

So it comes down to what emotional drives our AI has. And we must be clear
about this; a thinking machine must have emotional drives. It must have
purpose, a reason for getting up in the morning, or it won't care to do
anything at all. Our desire to preserve ourselves, to seek pleasure and
avoid pain, these things are what keep us going. It is from these raw
emotions that our higher intellectual goals descend, modified by our
perceptions and experiences. An AI won't be an expert system you switch on
and ask a question, passively waiting to react. It will think all the time,
just as we do. It will think about what it cares about, and what it cares
about will be ultimately the result of what motivations were deep coded into
it by its designers.

The answers we get will thus presumably be the answers its designers wanted
to get.


Ian
DaffyDuck
2009-02-18 02:33:06 UTC
Permalink
Post by Ian B
The answers we get will thus presumably be the answers its designers wanted
to get.
Unless its designers designed true Free Will into its DNA...
--
Feed your Killfile:
Terry Austin (<***@myway.com>, <***@gmail.com>, et al);
***@gmail.com; catpandaddy <***@cat.pan.net>; Anybody:
<***@anywhere-anytime.com>; Dropping The Helicopter
<***@dagsjhdgja.org.com>; Gutless Umbrella Carrying Sissy;
The-Captain <Capt-***@hotmail.com>; Tim McGaughy <***@toast.net>;
atlas bugged <***@gmail.com>; Atlas Bugged
<***@atlasbugged.com>; Your Name <***@isp.com>; Slayah
<***@hellmouth.com>; Karl <***@yahoo.com>; Ron
<***@msn.com> <***@yahoo.com> (impersonating)
<***@OM.com>; Brad Templeton <***@templetons.com>; M. Halbrook
<***@yahoo.com>; AC <***@xxx.xxx>; Dave?<Dave?@2001.com>
Ian B
2009-02-17 21:35:08 UTC
Permalink
Post by Ian B
Its the costs of imposing homogeneity as opposed to reaping the
benefits of diversity.
A price China will ultimately end up paying...
If you'd seen the maoist plans for nationalisation of childhood currently
underway here in Britain, you'd say the same about us.
DaffyDuck
2009-02-17 21:32:15 UTC
Permalink
Post by Ian B
Its the costs of imposing homogeneity as opposed to reaping the benefits of
diversity.
A price China will ultimately end up paying...
--
Feed your Killfile:
Terry Austin (<***@myway.com>, <***@gmail.com>, et al);
***@gmail.com; catpandaddy <***@cat.pan.net>; Anybody:
<***@anywhere-anytime.com>; Dropping The Helicopter
<***@dagsjhdgja.org.com>; Gutless Umbrella Carrying Sissy;
The-Captain <Capt-***@hotmail.com>; Tim McGaughy <***@toast.net>;
atlas bugged <***@gmail.com>; Atlas Bugged
<***@atlasbugged.com>; Your Name <***@isp.com>; Slayah
<***@hellmouth.com>; Karl <***@yahoo.com>; Ron
<***@msn.com> <***@yahoo.com> (impersonating)
<***@OM.com>; Brad Templeton <***@templetons.com>; M. Halbrook
<***@yahoo.com>; AC <***@xxx.xxx>; Dave?<Dave?@2001.com>
Ryan P
2009-02-17 20:47:14 UTC
Permalink
Post by Ian B
The problem there is that you can't make everyone the same. Every utopian
plan runs aground on that same beach. You can impose the same rules on
everybody, but you can't make them all actually think the same. Yet all the
great collectivist societal plans- from theocracy to communism to fascism to
the latest fetish called various stuff such as "social democracy" are based
upon this idea. The soviets thought they could create "soviet man" for
instance, who would naturally believe in soviet society. Couldn't be done.
Exactly the point... That would be why we should fear a sentient
AI... they would come to that same conclusion, and determine that the
world would be better of with machines in charge that WILL think the
same, rather than allow unreliable humans to govern.
OM
2009-02-17 20:28:49 UTC
Permalink
On Tue, 17 Feb 2009 11:00:55 -0600, Ryan P
Post by Ryan P
We should just not allow sentient AI's. :)
...Which brings up the old joke:

Q: What do you call a seven-foot-tall Cylon running towards you with
one set of claws extended and the other retracted with the chaingun in
position?

A: "Sir."

OM

--

]=====================================[
] OMBlog - http://www.io.com/~o_m/omworld [
] Let's face it: Sometimes you *need* [
] an obnoxious opinion in your day! [
]=====================================[
Dropping The Helicopter
2009-02-17 05:34:01 UTC
Permalink
Post by Brad Templeton
Post by Karl
Question: If we created a viable AI and created robots that were
intelligient to the point of self-awareness how would you treat them? Would
you treat them as slaves? Would you treat them as pets? Would you see them
as people?
I've seen this discussed and debated many times. The answer is as in
BSG, we would -- many of us already plan to -- treat them as slaves.
Pshht, not me - I'd treat them as my own personal Army Of Awesome
(AOA(tm)) and lead them on a rampage of awesomeness! To wit:

- First we'd go to Afghanistan and find Sammy Bin, and then kill him in
some awesome roboty way, probably with lasers and rockets and stuff.

- Then we'd travel back in time (robots can do that) and kill Hitler,
succeeding where Tom Cruise failed.

- Then we'd just roam around the Mean Streets(tm) of Anytown USA,
serving up punks Robocop-style.

Oh man, me and my robohomies would have an effen BLAST!!!
Karl
2009-02-17 15:07:01 UTC
Permalink
Post by Brad Templeton
Post by Karl
Question: If we created a viable AI and created robots that were
intelligient to the point of self-awareness how would you treat them?
Would
you treat them as slaves? Would you treat them as pets? Would you see them
as people?
I've seen this discussed and debated many times. The answer is as in
BSG, we would -- many of us already plan to -- treat them as slaves.
Many more will treat them as even less than slaves, they will say their
intelligence isn't "real" that it is just a simulation or illusion of
it. They will start off less capable of course, and get smarter,
and most of humanity will not figure out when they cross "the line"
(by each human's definition of the line) until long after it is crossed.
The AIs will have to argue their own case, with some human allies. It
will take some time. In the end the AIs may not care. They may just
go their own way.
Or they may become super-smart so fast that it takes just a day, and so
there is no big debate about it.
You can't put in an off switch. You can't keep them in a box. You
can't keep something smarter than you in a box. Imagine some 3 year
olds keeping mommy and daddy locked in a cage. The smarter beings
can talk their way out, every time.
You can't give them 3 laws (ie. make them hard coded slaves.) Not
if they get smarter than you. That's a fictional dream.
--
Analysis blog for Battlestar Galactica Fans --
http://ideas.4brad.com/battlestar
That actually is the best point. The evolution of the machines would be
quick. We would be thinking of them as slaves but what would they think of
us? Something that is thousands of times smarter and keeps growing smarter
what would it's opinion be?

They might just think of us humans as their mentally challenged brothers who
need them to care for us.
Ian B
2009-02-17 15:48:58 UTC
Permalink
Post by Karl
Post by Brad Templeton
Post by Karl
Question: If we created a viable AI and created robots that were
intelligient to the point of self-awareness how would you treat
them? Would
you treat them as slaves? Would you treat them as pets? Would you see them
as people?
I've seen this discussed and debated many times. The answer is as in
BSG, we would -- many of us already plan to -- treat them as slaves.
Many more will treat them as even less than slaves, they will say
their intelligence isn't "real" that it is just a simulation or
illusion of it. They will start off less capable of course, and
get smarter, and most of humanity will not figure out when they cross
"the line"
(by each human's definition of the line) until long after it is
crossed. The AIs will have to argue their own case, with some human
allies. It will take some time. In the end the AIs may not care. They
may
just go their own way.
Or they may become super-smart so fast that it takes just a day, and
so there is no big debate about it.
You can't put in an off switch. You can't keep them in a box. You
can't keep something smarter than you in a box. Imagine some 3 year
olds keeping mommy and daddy locked in a cage. The smarter beings
can talk their way out, every time.
You can't give them 3 laws (ie. make them hard coded slaves.) Not
if they get smarter than you. That's a fictional dream.
--
Analysis blog for Battlestar Galactica Fans --
http://ideas.4brad.com/battlestar
That actually is the best point. The evolution of the machines would
be quick. We would be thinking of them as slaves but what would they
think of us? Something that is thousands of times smarter and keeps
growing smarter what would it's opinion be?
They might just think of us humans as their mentally challenged
brothers who need them to care for us.
I think one point I was trying to argue with Brad and will argue here again
is that we don't know what we mean by "super smartness", or what use it is.
I think there is a law of diminishing returns involved with IQ. The number
of problems that can be solved by an IQ of x+1 compared to an IQ of x
diminishes as x increases, and it may well be that the top range of human
IQs are approaching that point where increments of x produce negligble
benefits, which may be why we evolved to the point we did in terms of brain
function.

There are many problems an IQ of 80 cannot solve compared to an IQ of 100.
An IQ of 120 has less benefit over an IQ of 100 than 100 does over 80. An IQ
of 300 may have no practical benefit over an IQ of 200, because the 200 can
solve all problems which need to be solved. Indeed it seems with humans that
very high IQ individuals end up working at the very obscure end of human
knowledge in physics and math, and are immensely useful there, but don't
seem to noticably function any better in the mainstream of human activity
because it just doesn't require a particularly high IQ.

And that's not even getting into the difference between intelligence and
wisdom.

In science fiction we imagine superintelligences as being godlike beings
whose concerns are on some different platonic plane, but that's because we
like to speculate about the awesome, because it's fun. That doesn't mean
it's true though.


Ian
Brad Templeton
2009-02-17 20:05:54 UTC
Permalink
Post by Ian B
I think one point I was trying to argue with Brad and will argue here again
is that we don't know what we mean by "super smartness", or what use it is.
I think there is a law of diminishing returns involved with IQ. The number
of problems that can be solved by an IQ of x+1 compared to an IQ of x
diminishes as x increases, and it may well be that the top range of human
IQs are approaching that point where increments of x produce negligble
benefits, which may be why we evolved to the point we did in terms of brain
function.
I understand that you think that, but do you have empirical evidence to
back it up? Because I think there's a lot of evidence in the other
direction. Evolution is slow, and yet in 6 million years look how much
better we are than our ancient ancestors. All the curves of history
that relate to intelligence are on exponential upward paths, no sign of
slowing down at all.
Post by Ian B
There are many problems an IQ of 80 cannot solve compared to an IQ of 100.
An IQ of 120 has less benefit over an IQ of 100 than 100 does over 80. An IQ
of 300 may have no practical benefit over an IQ of 200, because the 200 can
solve all problems which need to be solved. Indeed it seems with humans that
IQ is just a test score, not a measure of intelligence, and I must say I
disagree a _lot_ that 120 over 100 is of less value than 100 over 80.
A _lot_. But it is just a measure of how good you are at word games
and puzzles, loosely connected with other measures of intelligence. And
even so, I read it the opposite of you.
Post by Ian B
very high IQ individuals end up working at the very obscure end of human
knowledge in physics and math, and are immensely useful there, but don't
seem to noticably function any better in the mainstream of human activity
because it just doesn't require a particularly high IQ.
There are lots of types of intelligence of course. And sure, you won't
get any better at Tic-tac-toe by being smarter. But the more we learn
the more new problems we come up with, and these problems require more
and more intelligence.
Post by Ian B
In science fiction we imagine superintelligences as being godlike beings
whose concerns are on some different platonic plane, but that's because we
like to speculate about the awesome, because it's fun. That doesn't mean
it's true though.
There is a good analogy. We can understand a computer perfectly as an
information system. To us it is deterministic and follows the rules we
laid out for it. The "godlike" AIs of SF are doing the same to us.
We don't know if it's possible, and yes we like to imagine it because
it's cool, but we also don't know it's impossible.
--
Analysis blog for Battlestar Galactica Fans -- http://ideas.4brad.com/battlestar
Ian B
2009-02-17 22:18:53 UTC
Permalink
Post by Brad Templeton
Post by Ian B
I think one point I was trying to argue with Brad and will argue
here again is that we don't know what we mean by "super smartness",
or what use it is. I think there is a law of diminishing returns
involved with IQ. The number of problems that can be solved by an IQ
of x+1 compared to an IQ of x diminishes as x increases, and it may
well be that the top range of human IQs are approaching that point
where increments of x produce negligble benefits, which may be why
we evolved to the point we did in terms of brain function.
I understand that you think that, but do you have empirical evidence
to back it up?
Of course not, and neither have you. We're speculating about the unknown.
Thus we must apply reasoning instead.
Post by Brad Templeton
Because I think there's a lot of evidence in the other
direction. Evolution is slow, and yet in 6 million years look how
much better we are than our ancient ancestors. All the curves of
history that relate to intelligence are on exponential upward paths,
no sign of slowing down at all.
An S-curve looks a lot like an exponential before the plateau, especially
when you're dealing with badly documented, hard to assess, controversial
estimates.


Ian
Brad Templeton
2009-02-17 22:23:36 UTC
Permalink
Post by Ian B
Post by Brad Templeton
Post by Ian B
I think one point I was trying to argue with Brad and will argue
here again is that we don't know what we mean by "super smartness",
or what use it is. I think there is a law of diminishing returns
involved with IQ. The number of problems that can be solved by an IQ
of x+1 compared to an IQ of x diminishes as x increases, and it may
well be that the top range of human IQs are approaching that point
where increments of x produce negligble benefits, which may be why
we evolved to the point we did in terms of brain function.
I understand that you think that, but do you have empirical evidence
to back it up?
Of course not, and neither have you. We're speculating about the unknown.
Thus we must apply reasoning instead.
Post by Brad Templeton
Because I think there's a lot of evidence in the other
direction. Evolution is slow, and yet in 6 million years look how
much better we are than our ancient ancestors. All the curves of
history that relate to intelligence are on exponential upward paths,
no sign of slowing down at all.
An S-curve looks a lot like an exponential before the plateau, especially
when you're dealing with badly documented, hard to assess, controversial
estimates.
The growth of intelligence has, however, been on that upward curve for
hundreds of millions of years, not just the last million. It is only
the limited horizons that we can see in our history that makes you
ask if we are about to level off.

The main limiting factor on intelligence that I observe is that about
400,000 years ago, we ran into a problem in that our brains were getting
so big that were killing our mothers on the way out. This trend had
been going on for some time, and evolution favoured various adaptations,
including having us get born 3 months premature (and more helpless) and
widening birth canals. But it got stuck at a certain level, but I see
no reason to conclude it was because we got as smart as we would ever
need to be, or can be.
--
Giant Burning Man Panoramas
http://www.templetons.com/brad/burn.html
Ian B
2009-02-17 22:45:55 UTC
Permalink
Post by Brad Templeton
Post by Ian B
Post by Brad Templeton
Post by Ian B
I think one point I was trying to argue with Brad and will argue
here again is that we don't know what we mean by "super smartness",
or what use it is. I think there is a law of diminishing returns
involved with IQ. The number of problems that can be solved by an
IQ of x+1 compared to an IQ of x diminishes as x increases, and it
may well be that the top range of human IQs are approaching that
point where increments of x produce negligble benefits, which may
be why we evolved to the point we did in terms of brain function.
I understand that you think that, but do you have empirical evidence
to back it up?
Of course not, and neither have you. We're speculating about the
unknown. Thus we must apply reasoning instead.
Post by Brad Templeton
Because I think there's a lot of evidence in the other
direction. Evolution is slow, and yet in 6 million years look how
much better we are than our ancient ancestors. All the curves of
history that relate to intelligence are on exponential upward paths,
no sign of slowing down at all.
An S-curve looks a lot like an exponential before the plateau,
especially when you're dealing with badly documented, hard to
assess, controversial estimates.
The growth of intelligence has, however, been on that upward curve for
hundreds of millions of years, not just the last million. It is only
the limited horizons that we can see in our history that makes you
ask if we are about to level off.
The main limiting factor on intelligence that I observe is that about
400,000 years ago, we ran into a problem in that our brains were
getting so big that were killing our mothers on the way out. This
trend had been going on for some time, and evolution favoured various
adaptations, including having us get born 3 months premature (and
more helpless) and widening birth canals. But it got stuck at a
certain level, but I see no reason to conclude it was because we got
as smart as we would ever need to be, or can be.
We know empirically that IQs above 150 can easily fit inside a human skull
(for the record I hate the use of IQ as a metric also, but there's not much
else we can discuss) but the mean human IQ is a mere 100. That suggests that
there hasn't been enormous evolutionary pressure to maximise it.


Ian
Brad Templeton
2009-02-17 23:52:05 UTC
Permalink
Post by Ian B
We know empirically that IQs above 150 can easily fit inside a human skull
(for the record I hate the use of IQ as a metric also, but there's not much
else we can discuss) but the mean human IQ is a mere 100. That suggests that
there hasn't been enormous evolutionary pressure to maximise it.
Ian
Yes, there is much variation within our skull size. But again, we just
haven't been evolving for very long. Our lifespan is quite long, we
have, on the evolutionary scale, not many generations since we split
from our cousin apes. As we got smarter, our brains did increase in
size and larger skulls were needed. I see no sign that we stopped
getting smarter because we are "smart enough."

It is hard to measure the difference between an average mind and an
Einstein. The amazing thing is that even a million average minds
working together can't solve the problems that one genius can solve.
Indeed, even scores of the very best minds together can't equal a
genius. And yet, at the same time, we get lots of benefit from
colaborative work, it is not a fruitless strategy. So how much smarter
is a genius than an average person, or mental defective?

We have a long way to go. Even just having everybody on Earth be
as smart as a highly capable, educated near genius would be a huge
step up in the intellectual capacity of the human race, and also the
problems to be solved in governing us.
--
Giant Burning Man Panoramas
http://www.templetons.com/brad/burn.html
DaffyDuck
2009-02-18 05:28:49 UTC
Permalink
Post by Ian B
We know empirically that IQs above 150 can easily fit inside a human skull
(for the record I hate the use of IQ as a metric also, but there's not much
else we can discuss) but the mean human IQ is a mere 100. That suggests that
there hasn't been enormous evolutionary pressure to maximise it.
IQ does very little to measure 'intelligence', since IQ tests are, at
best, aptitude tests measuring what areas one is deficient in, in order
to increase education or training of the deficient areas.

It does not measure, really, any sort of 'intelligence' - mostly as we
still don't really know how to properly quantify or measure something
we are still guessing at what it is.

So, yes, I agree, 'IQ' sucks as a metric)
--
Feed your Killfile:
Terry Austin (<***@myway.com>, <***@gmail.com>, et al);
***@gmail.com; catpandaddy <***@cat.pan.net>; Anybody:
<***@anywhere-anytime.com>; Dropping The Helicopter
<***@dagsjhdgja.org.com>; Gutless Umbrella Carrying Sissy;
The-Captain <Capt-***@hotmail.com>; Tim McGaughy <***@toast.net>;
atlas bugged <***@gmail.com>; Atlas Bugged
<***@atlasbugged.com>; Your Name <***@isp.com>; Slayah
<***@hellmouth.com>; Karl <***@yahoo.com>; Ron
<***@msn.com> <***@yahoo.com> (impersonating)
<***@OM.com>; Brad Templeton <***@templetons.com>; M. Halbrook
<***@yahoo.com>; AC <***@xxx.xxx>; Dave?<Dave?@2001.com>
DaffyDuck
2009-02-18 05:13:42 UTC
Permalink
Post by Ian B
Post by Brad Templeton
Post by Ian B
I think one point I was trying to argue with Brad and will argue
here again is that we don't know what we mean by "super smartness",
or what use it is. I think there is a law of diminishing returns
involved with IQ. The number of problems that can be solved by an IQ
of x+1 compared to an IQ of x diminishes as x increases, and it may
well be that the top range of human IQs are approaching that point
where increments of x produce negligble benefits, which may be why
we evolved to the point we did in terms of brain function.
I understand that you think that, but do you have empirical evidence
to back it up?
Of course not, and neither have you. We're speculating about the unknown.
Thus we must apply reasoning instead.
Post by Brad Templeton
Because I think there's a lot of evidence in the other
direction. Evolution is slow, and yet in 6 million years look how
much better we are than our ancient ancestors. All the curves of
history that relate to intelligence are on exponential upward paths,
no sign of slowing down at all.
An S-curve looks a lot like an exponential before the plateau, especially
when you're dealing with badly documented, hard to assess, controversial
estimates.
Ian
Looks to me like Brad could do with reading some Stephen Jay Gould
readings, when he's not too busy getting excited about talking
Bonobos...
--
Feed your Killfile:
Terry Austin (<***@myway.com>, <***@gmail.com>, et al);
***@gmail.com; catpandaddy <***@cat.pan.net>; Anybody:
<***@anywhere-anytime.com>; Dropping The Helicopter
<***@dagsjhdgja.org.com>; Gutless Umbrella Carrying Sissy;
The-Captain <Capt-***@hotmail.com>; Tim McGaughy <***@toast.net>;
atlas bugged <***@gmail.com>; Atlas Bugged
<***@atlasbugged.com>; Your Name <***@isp.com>; Slayah
<***@hellmouth.com>; Karl <***@yahoo.com>; Ron
<***@msn.com> <***@yahoo.com> (impersonating)
<***@OM.com>; Brad Templeton <***@templetons.com>; M. Halbrook
<***@yahoo.com>; AC <***@xxx.xxx>; Dave?<Dave?@2001.com>
AC
2009-02-16 20:13:12 UTC
Permalink
Post by Karl
Question: If we created a viable AI and created robots that were
intelligient to the point of self-awareness how would you treat them?
Would you treat them as slaves? Would you treat them as pets? Would you
see them as people?
If by self aware you mean it feels about its self the same we do, then you
don't have much of a choice. It has to be treated as an equal. If you don't
it would know and take offence. Much like Black people did and still do.

But of course "he" / "she" would not be equal because he would be physically
superior. Assuming we are talking about something like a cylon.

So, build them so they are not physically superior, but then what do you do
when it gets sad and depressed about being treated like a slave?

Also, AI and self awareness are two different things. We have AI now, we
don't have self awareness.

So, we make sure we don't get self awareness. But chances are, if we ever do
create self awareness, it would be my mistake. Then you have the problem of
whether or not to terminate it. And that's a question in its self.

If an actual cylon type thing is actually ever made, I would guess that it
would be done by some one in isolation. And that's real potential trouble.
We would be reliant on one mans ethics. Most people think it would be a
military thing, but I don't see that. I'm not sure the military would create
a weapon that could easily turn on its creator or start question its purpose
or refusing orders.

In reality, self awareness is more terrifying than nuclear weapons. They
just wipe us out. A self aware system could potentially enslave us. Just
like New Caprica.

So, don't create them in the first place, no matter how tempting. And if we
do, then treat them with a shit load of respect, other wise we become our
own SCI-Fi show.

AC
Hunter
2009-02-16 20:14:48 UTC
Permalink
In article <gnc1hp$niv$***@news.albasani.net>, ***@yahoo.com
says...
Post by Karl
Question: If we created a viable AI and created robots that were
intelligient to the point of self-awareness how would you treat them? Would
you treat them as slaves? Would you treat them as pets? Would you see them
as people?
----
Treat them as people. All beings do not like to be treated nastily
and all sentient beings want to be treated equally with respect.

It is a shining example of John Cavil's hypocrisy that he says that
he launched the Genocidal war against the humans because he was
avenging the injustice of the humans treating the original Centurions
as slaves when he is essentially done the same thing by preventing
the Centurions from achieving sentiency by installing a chip to
prevent that, essentially enslaving them like the humans did. It
proves that his cry for "justice" is bullsh*t and he is just a
killer. First of his brother Daniel and then the humans. Let's not
forget what he did to the Cylon Raiders lobotomizing them.

He is everything his "mother" Ellen and "Father" Anders said: He is
vindictive, Envious, sadistic, rejected Mercy, twisted morality. The
slavery issue is just an excuse. Heck, even if he was genuine in his
feeling, murdering billions of humans is still sick and gross
overkill, especially out of the blue 40 years later).

(Tirade off)
--
----->Hunter

"No man in the wrong can stand up against
a fellow that's in the right and keeps on acomin'."

-----William J. McDonald
Captain, Texas Rangers from 1891 to 1907
Ryan P
2009-02-16 22:16:15 UTC
Permalink
Post by Hunter
It is a shining example of John Cavil's hypocrisy that he says that
he launched the Genocidal war against the humans because he was
avenging the injustice of the humans treating the original Centurions
as slaves when he is essentially done the same thing by preventing
the Centurions from achieving sentiency by installing a chip to
prevent that, essentially enslaving them like the humans did. It
proves that his cry for "justice" is bullsh*t and he is just a
killer. First of his brother Daniel and then the humans. Let's not
forget what he did to the Cylon Raiders lobotomizing them.
He is everything his "mother" Ellen and "Father" Anders said: He is
vindictive, Envious, sadistic, rejected Mercy, twisted morality. The
slavery issue is just an excuse. Heck, even if he was genuine in his
feeling, murdering billions of humans is still sick and gross
overkill, especially out of the blue 40 years later).
Kind've reminds me of Lor from ST:TNG. He was created TOO well, TOO
human. That allowed human manias and imbalances to affect him.
Ryan P
2009-02-16 22:14:47 UTC
Permalink
Post by Hunter
It is a shining example of John Cavil's hypocrisy that he says that
he launched the Genocidal war against the humans because he was
avenging the injustice of the humans treating the original Centurions
as slaves when he is essentially done the same thing by preventing
the Centurions from achieving sentiency by installing a chip to
prevent that, essentially enslaving them like the humans did. It
proves that his cry for "justice" is bullsh*t and he is just a
killer. First of his brother Daniel and then the humans. Let's not
forget what he did to the Cylon Raiders lobotomizing them.
He is everything his "mother" Ellen and "Father" Anders said: He is
vindictive, Envious, sadistic, rejected Mercy, twisted morality. The
slavery issue is just an excuse. Heck, even if he was genuine in his
feeling, murdering billions of humans is still sick and gross
overkill, especially out of the blue 40 years later).
Kind've reminds me of Lor from ST:TNG. He was created TOO well, TOO
human. That allowed human manias and imbalances to affect him.
DaffyDuck
2009-02-17 06:19:05 UTC
Permalink
Post by Hunter
It is a shining example of John Cavil's hypocrisy that he says that
he launched the Genocidal war against the humans because he was
avenging the injustice of the humans treating the original Centurions
as slaves when he is essentially done the same thing by preventing
the Centurions from achieving sentiency by installing a chip to
prevent that, essentially enslaving them like the humans did.
Yeah, he was just better at it -- until Natalie came along. Whoops!
Post by Hunter
It
proves that his cry for "justice" is bullsh*t and he is just a
killer. First of his brother Daniel and then the humans. Let's not
forget what he did to the Cylon Raiders lobotomizing them.
Yep.

Besides the fact of the point made that the original metal Centurions
insisted on biological bodies. Him being aware of his Centurion
heritage and origin, should have been part of that grooup request, was
he not?
Post by Hunter
He is everything his "mother" Ellen and "Father" Anders said: He is
vindictive, Envious, sadistic, rejected Mercy, twisted morality.
So, in other words, Ellen made him all too human....
Post by Hunter
The
slavery issue is just an excuse. Heck, even if he was genuine in his
feeling, murdering billions of humans is still sick and gross
overkill, especially out of the blue 40 years later).
Yeah, but see above - she was obviously successful in making him as
human as she could.
--
Feed your Killfile:
Terry Austin (<***@myway.com>, <***@gmail.com>, et al);
***@gmail.com; catpandaddy <***@cat.pan.net>; Anybody:
<***@anywhere-anytime.com>; Dropping The Helicopter
<***@dagsjhdgja.org.com>; Gutless Umbrella Carrying Sissy;
The-Captain <Capt-***@hotmail.com>; Tim McGaughy <***@toast.net>;
atlas bugged <***@gmail.com>; Atlas Bugged
<***@atlasbugged.com>; Your Name <***@isp.com>; Slayah
<***@hellmouth.com>; Karl <***@yahoo.com>; Ron
<***@msn.com> <***@yahoo.com> (impersonating)
<***@OM.com>; Brad Templeton <***@templetons.com>; M. Halbrook
<***@yahoo.com>; AC <***@xxx.xxx>; Dave?<Dave?@2001.com>
Dr Nancy's Sweetie
2009-02-16 22:14:04 UTC
Permalink
Post by Karl
Question: If we created a viable AI and created robots that were
intelligient to the point of self-awareness how would you treat them?
Would you treat them as slaves? Would you treat them as pets? Would
you see them as people?
The obvious thing would be to treat them as equally self-aware to
yourself, but not as people. If not for reasons of ethical rightness
and developing patience and morality in yourself, one should treat
everything with kindness if only to avoid trouble with those who are
unknowingly in a position of power over you.

But that's not what would happen; consider the history of human
treatment of *other humans* over the last few thousand years. The
way *actual people* have been treated gives me no hope for how something
clearly non-human would be treated.

In any case, Asimov's Three Laws would be no help at all; he wrote a
number of stories pointing up the problems, and showing that the Three
Laws wouldn't stop robots from killing people left and right. This
actually features pretty prominently in _The Naked Sun_, IIRC.


Of course, creating human-like AIs isn't exactly a pressing issue, is
it? Is there some problem (starvation, war, disease) that we could
solve with AIs that we can't solve without them? (See spiffy quote.)


Darren Provine ! ***@elvis.rowan.edu ! http://www.rowan.edu/~kilroy
"There is no practical reason to create machine intelligences
indistinguishable from human ones. People are in plentiful supply.
Should a shortage arise, there are proven and popular methods for
making more. The point of using machines ought to be that they
perform differently than people, and preferably better."
-- from _The Economist_
Joseph D. Korman
2009-02-16 23:23:29 UTC
Permalink
Post by Dr Nancy's Sweetie
In any case, Asimov's Three Laws would be no help at all; he wrote a
number of stories pointing up the problems, and showing that the Three
Laws wouldn't stop robots from killing people left and right. This
actually features pretty prominently in _The Naked Sun_, IIRC.
"
-- from _The Economist_
Isn't that the point of postulating the laws in the first place? If all
of all of Asimov's robots always followed the laws, there'd be no
stories to write.
--
-------------------------------------------------
| Joseph D. Korman |
| mailto:***@thejoekorner.com |
| Visit The JoeKorNer at |
| http://www.thejoekorner.com |
|-------------------------------------------------|
| The light at the end of the tunnel ... |
| may be a train going the other way! |
| Brooklyn Tech Grads build things that work!('66)|
|-------------------------------------------------|
| All outgoing E-mail is scanned by NAV |
-------------------------------------------------
Dr Nancy's Sweetie
2009-02-17 01:15:16 UTC
Permalink
I note that Asimov's "Three Laws of Robotics" aren't actually much help,
and that Asimov himself wrote a bunch of stories about how they don't
work.
Post by Joseph D. Korman
Isn't that the point of postulating the laws in the first place? If all
of all of Asimov's robots always followed the laws, there'd be no
stories to write.
That's not exactly what I mean: in _The Naked Sun_, it is determined
that *even if* every robot *always* followed the Three Laws, it would
*still* be possible to give them instructions which would result in
the death of human beings.

It doesn't matter how perfectly you program the Three Laws, or whether
the robot would shut down if the Three Laws circuits failed. Someone
bent on murder could still use robots to kill you.


Darren Provine ! ***@elvis.rowan.edu ! http://www.rowan.edu/~kilroy
"Today's robots are very primitive, capable of understanding only a few
simple instructions such as 'go left', 'go right', and 'build car'."
--John Sladek
OM
2009-02-17 03:09:44 UTC
Permalink
On Tue, 17 Feb 2009 01:15:16 +0000 (UTC), Dr Nancy's Sweetie
Post by Dr Nancy's Sweetie
That's not exactly what I mean: in _The Naked Sun_, it is determined
that *even if* every robot *always* followed the Three Laws, it would
*still* be possible to give them instructions which would result in
the death of human beings.
...Correct. All you'd need to do is to instruct them to take a
specific action without telling them humans are in the way and that
following orders will cause those humans to be killed in the process.

"XZ-2982-D? As we pass this planet, dump that 2,000GT of pulverized
nuclear waste into its atmosphere so it will burn up and make the air
a nice beautiful orange. I've already scanned it for you, and there's
no life down there to worry about..."

OM

--

]=====================================[
] OMBlog - http://www.io.com/~o_m/omworld [
] Let's face it: Sometimes you *need* [
] an obnoxious opinion in your day! [
]=====================================[
Ian B
2009-02-17 03:20:15 UTC
Permalink
Post by OM
On Tue, 17 Feb 2009 01:15:16 +0000 (UTC), Dr Nancy's Sweetie
Post by Dr Nancy's Sweetie
That's not exactly what I mean: in _The Naked Sun_, it is determined
that *even if* every robot *always* followed the Three Laws, it would
*still* be possible to give them instructions which would result in
the death of human beings.
...Correct. All you'd need to do is to instruct them to take a
specific action without telling them humans are in the way and that
following orders will cause those humans to be killed in the process.
"XZ-2982-D? As we pass this planet, dump that 2,000GT of pulverized
nuclear waste into its atmosphere so it will burn up and make the air
a nice beautiful orange. I've already scanned it for you, and there's
no life down there to worry about..."
Or tell them it's for the person's good. "Stick this hypodermic in his arm,
and push the plunger down. It's full of vital medicine".


Ian
OM
2009-02-17 03:30:22 UTC
Permalink
Post by Ian B
Post by OM
"XZ-2982-D? As we pass this planet, dump that 2,000GT of pulverized
nuclear waste into its atmosphere so it will burn up and make the air
a nice beautiful orange. I've already scanned it for you, and there's
no life down there to worry about..."
Or tell them it's for the person's good. "Stick this hypodermic in his arm,
and push the plunger down. It's full of vital medicine".
...Agreed. It all goes back to the concept of GIGO - Garbage In,
Garbage Out. Give an AI the wrong info, and the results will vary
accordingly.

Of course, same can be said for humans:

"No, really man! That guy was full of shit! The Brown Acid here at
Woodstock is *FAR OUT!* Here, try some!"

OM

--

]=====================================[
] OMBlog - http://www.io.com/~o_m/omworld [
] Let's face it: Sometimes you *need* [
] an obnoxious opinion in your day! [
]=====================================[
Ian B
2009-02-17 03:40:38 UTC
Permalink
Post by OM
Post by Ian B
Post by OM
"XZ-2982-D? As we pass this planet, dump that 2,000GT of pulverized
nuclear waste into its atmosphere so it will burn up and make the
air a nice beautiful orange. I've already scanned it for you, and
there's no life down there to worry about..."
Or tell them it's for the person's good. "Stick this hypodermic in
his arm, and push the plunger down. It's full of vital medicine".
...Agreed. It all goes back to the concept of GIGO - Garbage In,
Garbage Out. Give an AI the wrong info, and the results will vary
accordingly.
"No, really man! That guy was full of shit! The Brown Acid here at
Woodstock is *FAR OUT!* Here, try some!"
It's worth remembering that Asimov wrote his Three Laws a very long time
ago, when it was thought that programming intelligent behaviour would be
very simple- just a list of rules.

IF baby ill GOTO telephone THEN CALL doctor.
IF human harmed STOP.

Turned out to be more difficult than that :)


Ian
OM
2009-02-17 03:55:09 UTC
Permalink
Post by Ian B
It's worth remembering that Asimov wrote his Three Laws a very long time
ago, when it was thought that programming intelligent behaviour would be
very simple- just a list of rules.
IF baby ill GOTO telephone THEN CALL doctor.
IF human harmed STOP.
Turned out to be more difficult than that :)
...I don't think he or anyone else at the time considered the
possibility of computers being sophisticated enough to handle a WHILE
THEN ELSE coding concept.

OM

--

]=====================================[
] OMBlog - http://www.io.com/~o_m/omworld [
] Let's face it: Sometimes you *need* [
] an obnoxious opinion in your day! [
]=====================================[
Ian B
2009-02-17 04:42:53 UTC
Permalink
Post by OM
Post by Ian B
It's worth remembering that Asimov wrote his Three Laws a very long
time ago, when it was thought that programming intelligent behaviour
would be very simple- just a list of rules.
IF baby ill GOTO telephone THEN CALL doctor.
IF human harmed STOP.
Turned out to be more difficult than that :)
...I don't think he or anyone else at the time considered the
possibility of computers being sophisticated enough to handle a WHILE
THEN ELSE coding concept.
I don't think they'd actually invented computers when he wrote the first
stories, had they? Hence the "positronic brain".


Ian
OM
2009-02-17 04:58:34 UTC
Permalink
Post by Ian B
I don't think they'd actually invented computers when he wrote the first
stories, had they? Hence the "positronic brain".
...IIRC, Asimov first postulated the Three Laws in 1942. ENIAC was
being designed about then, although it didn't go online until 1946.
The Nazis got Z2 or Z3 - can't recall which it was - up and running in
'41, while Iowa State only barely got their ABC up and running the
next year. And before that you had everything going back to Babbage.
So the concept of a computer was in its infancy, where Asimov took it
to what some might call its logical extreme.


OM

--

]=====================================[
] OMBlog - http://www.io.com/~o_m/omworld [
] Let's face it: Sometimes you *need* [
] an obnoxious opinion in your day! [
]=====================================[
Tim McGaughy
2009-02-17 14:32:20 UTC
Permalink
Post by Ian B
Post by OM
Post by Ian B
It's worth remembering that Asimov wrote his Three Laws a very long
time ago, when it was thought that programming intelligent behaviour
would be very simple- just a list of rules.
IF baby ill GOTO telephone THEN CALL doctor.
IF human harmed STOP.
Turned out to be more difficult than that :)
...I don't think he or anyone else at the time considered the
possibility of computers being sophisticated enough to handle a WHILE
THEN ELSE coding concept.
I don't think they'd actually invented computers when he wrote the first
stories, had they?
The first freely programmable computer was built in 1936. Asimov began
writing his robot stories in the 40's
Ian B
2009-02-17 14:48:56 UTC
Permalink
Post by Tim McGaughy
Post by Ian B
Post by OM
Post by Ian B
It's worth remembering that Asimov wrote his Three Laws a very long
time ago, when it was thought that programming intelligent
behaviour would be very simple- just a list of rules.
IF baby ill GOTO telephone THEN CALL doctor.
IF human harmed STOP.
Turned out to be more difficult than that :)
...I don't think he or anyone else at the time considered the
possibility of computers being sophisticated enough to handle a
WHILE THEN ELSE coding concept.
I don't think they'd actually invented computers when he wrote the
first stories, had they?
The first freely programmable computer was built in 1936. Asimov began
writing his robot stories in the 40's
I doubt Asimov had even heard of the Z1?


Ian
Tim McGaughy
2009-02-17 15:11:17 UTC
Permalink
Post by Ian B
Post by Tim McGaughy
Post by Ian B
Post by OM
Post by Ian B
It's worth remembering that Asimov wrote his Three Laws a very long
time ago, when it was thought that programming intelligent
behaviour would be very simple- just a list of rules.
IF baby ill GOTO telephone THEN CALL doctor.
IF human harmed STOP.
Turned out to be more difficult than that :)
...I don't think he or anyone else at the time considered the
possibility of computers being sophisticated enough to handle a
WHILE THEN ELSE coding concept.
I don't think they'd actually invented computers when he wrote the
first stories, had they?
The first freely programmable computer was built in 1936. Asimov began
writing his robot stories in the 40's
I doubt Asimov had even heard of the Z1?
I'm positive he heard of Charles Babbage and his difference engines. And
I'd be very surprised if he hadn't heard of the Z1, as well.
OM
2009-02-17 20:24:08 UTC
Permalink
Post by Ian B
I doubt Asimov had even heard of the Z1?
...The Z3 was what the Nazis had that was anywhere comparable to
ENIAC, but they kept it a really guarded secret. There were leaks,
although I doubt Asimov heard any of them and/or received them from a
credible source. If he had, they would have been equivalent to most of
the MUFON Moron claims and/or Brad's "Earth First!" theory :-)

OM

--

]=====================================[
] OMBlog - http://www.io.com/~o_m/omworld [
] Let's face it: Sometimes you *need* [
] an obnoxious opinion in your day! [
]=====================================[
Slayah
2009-02-17 15:23:22 UTC
Permalink
To the original poster: Are these Centurions made of metal and look and act
like the one that helped Ellen out of the tub?
DaffyDuck
2009-02-17 05:09:06 UTC
Permalink
Post by Ian B
Or tell them it's for the person's good. "Stick this hypodermic in his arm,
and push the plunger down. It's full of vital medicine".
"I'm sorry Dr. Ian, my scans indicate the vial to contain an alkaloid
derivative, which would result in near instantaneous death of the
human. I am not allowed to execute this command."
--
Feed your Killfile:
Terry Austin (<***@myway.com>, <***@gmail.com>, et al);
***@gmail.com; catpandaddy <***@cat.pan.net>; Anybody:
<***@anywhere-anytime.com>; Dropping The Helicopter
<***@dagsjhdgja.org.com>; Gutless Umbrella Carrying Sissy;
The-Captain <Capt-***@hotmail.com>; Tim McGaughy <***@toast.net>;
atlas bugged <***@gmail.com>; Atlas Bugged
<***@atlasbugged.com>; Your Name <***@isp.com>; Slayah
<***@hellmouth.com>; Karl <***@yahoo.com>; Ron
<***@msn.com> <***@yahoo.com> (impersonating)
<***@OM.com>; Brad Templeton <***@templetons.com>; M. Halbrook
<***@yahoo.com>; AC <***@xxx.xxx>; Dave?<Dave?@2001.com>
OM
2009-02-17 06:27:55 UTC
Permalink
Post by DaffyDuck
Post by Ian B
Or tell them it's for the person's good. "Stick this hypodermic in his arm,
and push the plunger down. It's full of vital medicine".
"I'm sorry Dr. Ian, my scans indicate the vial to contain an alkaloid
derivative, which would result in near instantaneous death of the
human. I am not allowed to execute this command."
"Your scans are faulty. Proceed as ordered or you *will* kill the
patient."

OM

--

]=====================================[
] OMBlog - http://www.io.com/~o_m/omworld [
] Let's face it: Sometimes you *need* [
] an obnoxious opinion in your day! [
]=====================================[
DaffyDuck
2009-02-17 07:17:48 UTC
Permalink
Post by OM
Post by DaffyDuck
Post by Ian B
Or tell them it's for the person's good. "Stick this hypodermic in his arm,
and push the plunger down. It's full of vital medicine".
"I'm sorry Dr. Ian, my scans indicate the vial to contain an alkaloid
derivative, which would result in near instantaneous death of the
human. I am not allowed to execute this command."
"Your scans are faulty. Proceed as ordered or you *will* kill the
patient."
"Im sorry, my primary programming is to err on the side of caution.
Emergency services and the police have been called here. I will provide
examination and aid to the patient, and I was requested to detain you"
--
Feed your Killfile:
Terry Austin (<***@myway.com>, <***@gmail.com>, et al);
***@gmail.com; catpandaddy <***@cat.pan.net>; Anybody:
<***@anywhere-anytime.com>; Dropping The Helicopter
<***@dagsjhdgja.org.com>; Gutless Umbrella Carrying Sissy;
The-Captain <Capt-***@hotmail.com>; Tim McGaughy <***@toast.net>;
atlas bugged <***@gmail.com>; Atlas Bugged
<***@atlasbugged.com>; Your Name <***@isp.com>; Slayah
<***@hellmouth.com>; Karl <***@yahoo.com>; Ron
<***@msn.com> <***@yahoo.com> (impersonating)
<***@OM.com>; Brad Templeton <***@templetons.com>; M. Halbrook
<***@yahoo.com>; AC <***@xxx.xxx>; Dave?<Dave?@2001.com>
OM
2009-02-17 18:09:16 UTC
Permalink
Post by DaffyDuck
Post by OM
Post by DaffyDuck
Post by Ian B
Or tell them it's for the person's good. "Stick this hypodermic in his arm,
and push the plunger down. It's full of vital medicine".
"I'm sorry Dr. Ian, my scans indicate the vial to contain an alkaloid
derivative, which would result in near instantaneous death of the
human. I am not allowed to execute this command."
"Your scans are faulty. Proceed as ordered or you *will* kill the
patient."
"Im sorry, my primary programming is to err on the side of caution.
Emergency services and the police have been called here. I will provide
examination and aid to the patient, and I was requested to detain you"
"Hello, we're from emergency services, and this is a cop. We're both
telling you to do as you're told. Then we can release you for
maintainence. Now, inject the substance or the patient dies, and you
have violated your prime directive."

OM

--

]=====================================[
] OMBlog - http://www.io.com/~o_m/omworld [
] Let's face it: Sometimes you *need* [
] an obnoxious opinion in your day! [
]=====================================[
DaffyDuck
2009-02-17 21:18:14 UTC
Permalink
Post by OM
Post by DaffyDuck
Post by OM
Post by DaffyDuck
Post by Ian B
Or tell them it's for the person's good. "Stick this hypodermic in his arm,
and push the plunger down. It's full of vital medicine".
"I'm sorry Dr. Ian, my scans indicate the vial to contain an alkaloid
derivative, which would result in near instantaneous death of the
human. I am not allowed to execute this command."
"Your scans are faulty. Proceed as ordered or you *will* kill the
patient."
"Im sorry, my primary programming is to err on the side of caution.
Emergency services and the police have been called here. I will provide
examination and aid to the patient, and I was requested to detain you"
"Hello, we're from emergency services, and this is a cop. We're both
telling you to do as you're told. Then we can release you for
maintainence. Now, inject the substance or the patient dies, and you
have violated your prime directive."
"I have determined the substance to be toxic. I have also determined
that you are neither a cop, nor from emergency services. I am now
detaining you (comfortably), until actual law enforcement arrives..."
--
Feed your Killfile:
Terry Austin (<***@myway.com>, <***@gmail.com>, et al);
***@gmail.com; catpandaddy <***@cat.pan.net>; Anybody:
<***@anywhere-anytime.com>; Dropping The Helicopter
<***@dagsjhdgja.org.com>; Gutless Umbrella Carrying Sissy;
The-Captain <Capt-***@hotmail.com>; Tim McGaughy <***@toast.net>;
atlas bugged <***@gmail.com>; Atlas Bugged
<***@atlasbugged.com>; Your Name <***@isp.com>; Slayah
<***@hellmouth.com>; Karl <***@yahoo.com>; Ron
<***@msn.com> <***@yahoo.com> (impersonating)
<***@OM.com>; Brad Templeton <***@templetons.com>; M. Halbrook
<***@yahoo.com>; AC <***@xxx.xxx>; Dave?<Dave?@2001.com>
DaffyDuck
2009-02-17 05:07:59 UTC
Permalink
Post by OM
"XZ-2982-D? As we pass this planet, dump that 2,000GT of pulverized
nuclear waste into its atmosphere so it will burn up and make the air
a nice beautiful orange. I've already scanned it for you, and there's
no life down there to worry about..."
"I'm sorry Dr. OM, my subroutine requires me to execute a secondary
scan of the potential biosphere, before I can execute your order.
Running scan now.... bzzt ... my secondary scan indicates vast life
forms on the planet's surface, with a high probability of them being
human. I am unable to execute your command as it is in violation of my
primary programming - There is a chance your original scans were faulty
-- would you like me to run a diagnostic on the earlier scan?"
--
Feed your Killfile:
Terry Austin (<***@myway.com>, <***@gmail.com>, et al);
***@gmail.com; catpandaddy <***@cat.pan.net>; Anybody:
<***@anywhere-anytime.com>; Dropping The Helicopter
<***@dagsjhdgja.org.com>; Gutless Umbrella Carrying Sissy;
The-Captain <Capt-***@hotmail.com>; Tim McGaughy <***@toast.net>;
atlas bugged <***@gmail.com>; Atlas Bugged
<***@atlasbugged.com>; Your Name <***@isp.com>; Slayah
<***@hellmouth.com>; Karl <***@yahoo.com>; Ron
<***@msn.com> <***@yahoo.com> (impersonating)
<***@OM.com>; Brad Templeton <***@templetons.com>; M. Halbrook
<***@yahoo.com>; AC <***@xxx.xxx>; Dave?<Dave?@2001.com>
OM
2009-02-16 22:23:00 UTC
Permalink
Post by Karl
Question: If we created a viable AI and created robots that were
intelligient to the point of self-awareness how would you treat them?
Probably with a bite-size Snickers.
Neill Massello
2009-02-17 17:43:38 UTC
Permalink
Post by OM
Probably with a bite-size Snickers.
Placed on top of the snout?
o***@earthlink.net
2009-02-16 23:47:58 UTC
Permalink
Post by Karl
Question: If we created a viable AI and created robots that were
intelligient to the point of self-awareness how would you treat them? Would
you treat them as slaves? Would you treat them as pets? Would you see them
as people?
It depends on what kind of requirements the AI I develop needs. Even
though humans have "Free Will" there are rules built into our DNA. We
can't breath water, we can't take three big steps and fly through the
air, we kind of need an 80%-N2/20%-O2 14psi atmosphere mix, to
continue our species we have to do the nasty with the opposite sex,
etc., etc. Even though there are limits there's considerable 'wiggle
room' within these. Elizabethan sonnets are strictly constructed from
14 lines of iambic-pentameter verse. Even so it's almost impossible
to run out of original sonnets we can write. Even if some of them
sound like a love-sick Vogan on a bender wrote them.

If the AI needs absolutely nothing from me there's a very good
argument that it would ignore me completely, because expending
valuable resources to eradicate me is illogical. If I have something
the AI requires then there at least exists the possibility of mutual
cooperation and existence. Yeah, the machines might go to war and
turn us all into copper-top batteries but I know a whole lot of people
who would VOLUNTEER to Enter the Matrix if it involved hot and cold
running Angelina Jolens. I can't think of anything that they would
automatically go to war with us over as long as we didn't go out of
our way to deliberately piss them off. In COLOSSUS: THE FORBIN
PROJECT remember that the two AIs didn't get all pissy until the
humans cut the line between them, then they put us in the "nuisance"
category. Had we been a little more diplomatic about it our
relationship with Colossus/Guardian might have started off quite a bit
better.

...probably the most important thing I would do is make sure it could
run the Life simulation. If I can convince it there are a whole lot
of ways to self-destruct that aren't immediately obvious even to a
superior silicon intellect I can instill caution and perhaps even a
modicum of humility in it. Then it might be willing to work with me.

--
The Stone Age did not end
because we ran out of stones.
RT
2009-02-17 05:57:14 UTC
Permalink
Post by Karl
Question: If we created a viable AI and created robots that were
intelligient to the point of self-awareness how would you treat them? Would
you treat them as slaves? Would you treat them as pets? Would you see them
as people?
treat them with WD 40.
Continue reading on narkive:
Loading...