news indexed by topic - cognitive science archive
cognitive science - general index by topic to ai in the news
september 26, 2004: crick's
other goal - unlocking riddle of the mind. scientists continuing
study of consciousness. by bruce lieberman.
san diego union-tribune & signonsandiego.com. "francis crick focused on looking for an area of the brain that might be critical to human consciousness. as a young scientist in 1940s england, francis crick decided to devote his life to unraveling two mysteries: the foundation for all living things and how the brain gives rise to the mind. ... tomorrow, when the salk institute in la jolla hosts a public memorial for crick, who died july 28 at 88, that unfinished business will most certainly be talked about. how billions of brain cells interpret sensations, draw on memory and association to make sense of them, and create conscious thoughts about the world is unknown. 'it's inconceivable to us, but somehow it happens,' said terry sejnowski, a computational neurobiologist at the salk institute who studies how computers can be used to understand the brain. 'consciousness is elusive,' he said. 'it's hard to pin down.' ... illuminating how the brain creates consciousness would profoundly change the way humans view themselves, scientists say. ... engineers could build machines that truly think, bringing artificial intelligence out of science fiction and into the real world. ... [c]onsciousness is really about how all the parts come together to create the thinking mind. 'being reductionist is a good way to start, but at some point you have to . . . put together the pieces and see how they work together,' sejnowski said. he calls the effort to assemble the big picture of consciousness 'the humpty dumpty project.'"
>>> cognitive science, philosophy, neural networks & connectionist systems, events (@ resources for students), machine learning
september 21, 2004: a
little stroke of genius. a one-day symposium explores the link
between neuroscience and music. by arminta wallace. the irish times
(subscription req'd.). "another speaker at the symposium will
be prof paul robertson, a musician who has acquired a considerable
amount of expertise on the medical front. for many years as leader
of the medici string quartet, he developed an interest in neuropsychiatry
and presented a series called music and the mind for channel 4.
'it was a fantastic opportunity to talk to people who were doing
fascinating work - the most fascinating aspect of which was that
none of them knew about each other,' he says. 'so, for example,
there are people doing work with brain-damaged children using music
therapy but there's very little contact between them and the people
doing brain mapping. and there's virtually no connection between
either of those groups and the people doing artificial intelligence
in computing. but it doesn't take a mastermind to see that a huge
cross-fertilisation is possible in those areas.'"
>>> cognitive science,
music
september 16, 2004: children
create new sign language. by julianna kettlewell. bbc news.
"a new sign language created over the last 30 years by deaf
children in nicaragua has given experts a unique insight into how
languages evolve. the language follows many basic rules common to
all tongues, even though the children were not taught them. it indicates
some language traits are not passed on by culture, but instead arise
due to the innate way human beings process language, experts claim.
the us-led research is detailed in the latest issue of science magazine.
... [c]hildren instinctively break information down into small chunks
so they can have the flexibility to string them back together, to
form sentences with a range of meanings. interestingly, adults lose
this talent, which also suggests there is an innate element to the
language learning process."
>>> cognitive science,
natural language processing,
representation
september 8 / 15, 2004:
automatic
icons organize files. by kimberly patch. technology research
news. "researchers from the university of southern california,
the massachusetts institute of technology, and esc entertainment
are aiming to improve the lost-in-cyberspace problem with a tool
designed to tap people's facility with pictures. the system, dubbed
visualid, automatically generates detailed icons for specific files.
it assigns similar icons to related files by mutating the original
icon in a series. the degree of mutation depends on the degree of
similarity of the file names, which gives the user an approximate
visual sense of saliency, according to j.p. lewis, a researcher
at the university of southern california. ... beyond file management,
the icons system could be used for systems like air-traffic control,
said lewis. ... the system 'exploits the fact that appearance is
efficiently learned, searched and remembered, probably more so than
file names,' said lewis. 'psychological research has shown that
searching for a picture among other pictures is faster than searching
for a word among other words.' the bottom line is that interfaces
need scenery, said lewis."
>>> interfaces, cognitive
science, applications
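The core idea the article attributes to VisualID is that the degree of icon mutation tracks the similarity of file names. As a minimal, hedged sketch (the real system renders procedural icons; the function name and similarity measure here are illustrative assumptions, not the published algorithm):

```python
# Sketch: derive a "mutation degree" for a new icon from file-name
# similarity, so related file names get only slightly mutated icons.
from difflib import SequenceMatcher

def mutation_degree(name_a: str, name_b: str) -> float:
    """0.0 = identical names (identical icons) .. 1.0 = unrelated names."""
    similarity = SequenceMatcher(None, name_a, name_b).ratio()
    return 1.0 - similarity

print(mutation_degree("report_v1.txt", "report_v2.txt"))  # small: nearly identical icons
print(mutation_degree("report_v1.txt", "holiday.jpg"))    # large: visually distinct icons
```

A real implementation would feed this value into a procedural icon generator; the point of the sketch is only the name-similarity-to-mutation mapping the article describes.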
september 4, 2004: brain
research? pay it no mind. mystery of consciousness still outwitting
scientists. by philip marchand. the toronto star. "scientists
who have been trying to understand the brain have recently tried
to measure neural activity of republicans and democrats to see if
political affiliations had anything to do with brain chemistry.
the results were inconclusive. ... what really caught my eye about
a new york times magazine article on the topic was the following
statement: 'one of the most celebrated insights of the past 20 years
of neuroscience is the discovery -- largely associated with the
work of antonio damasio -- that the brain's emotional systems are
critical to logical decision-making. people who suffer from damaged
or impaired emotional systems can score well on logic tests but
often display markedly irrational behaviour in everyday life.' i'm
sure damasio has done good work, rooting around the neocortex. but
what does it say for neuroscience that one of its 'most celebrated
insights' is something we've known for three or four millennia?
... the bravest of the neuroscientists are trying to tackle the
toughest nut of all, the mystery of consciousness. ... a professor
named howard gardner, for example, whose 1985 book the mind's new
science helped to popularize the field of cognitive science, told
horgan that questions such as consciousness and free will were 'particularly
resistant' to the scientific habit of trying to break down a subject
into its most elemental parts, like neural pathways in the brain.
... the human brain is so complex it simply defies the same kind
of analysis that scientists devote to subatomic particles or human
immune systems. 'like neuroscientists, researchers in evolutionary
psychology and artificial intelligence are both bumping up against
the humpty dumpty dilemma,' [john] horgan writes. 'they can break
the mind into pieces, but they have no idea how to put it back together
again.'"
>>> emotion, creativity,
cognitive science, reasoning,
philosophy, neural
networks, machine learning
august 26, 2004: science
at the edge, edited by john brockman. book review by paul nettleton.
the guardian. "a stellar cast of thinkers tackles the really
big questions facing scientists in a book developed from pieces
that first appeared on the web forum edge (www.edge.org). betraying
that they were written for the screen, a leading role is given to
the computer and the potential for machine intelligence. brockman,
whose big black hat gives away that his day job is as literary agent
to scientists-turned-bestselling authors, argues in his introduction
that his contributors have broken down the barrier of cp snow's
two cultures and found - echoes of tony blair - a third way. a number
of chapters also echo the writers' latest books."
>>> cognitive science
august 3, 2004: mapping
the physical and mental universes. editorial by narayani ganesh.
the times of india. "if the manual of life is encoded in our
dna, where do we look to find the blueprint of consciousness? this
was a subject that fascinated francis crick, who, along with james
watson, discovered the double-helix structure of dna 50 years ago.
... this is the information age, thanks to the giant leaps we've
made in computer chip technology. david chalmers, of the department
of philosophy, university of arizona, raises a complex futuristic
question: if the precise interactions between our neurons could
be duplicated with silicon chips, would it give rise to the same
conscious experience? can consciousness arise in a complex, synthetic
system? in other words, can consciousness some day be achieved in
machines?"
>>> philosophy, cognitive
science
july 14, 2004: computer
brains. e4engineering.com. "a team of computer scientists
and mathematicians at palo alto, ca-based artificial development
are developing software to simulate the human brain's cortex and
peripheral systems. as a first step along the way, the company recently
disclosed that it has completed the development of a realistic representation
of the workflow of a functioning human cortex. dubbed the ccortex-based
autonomous cognitive model ('acm'), the software may have immediate
applications for data mining, network security, search engine technologies
and natural language processing."
>>> neural networks &
connectionist systems, machine
learning, natural language processing,
cognitive science, data
mining, information retrieval,
networks, applications
july 4, 2004: programming
doesn't begin to define computer science. by jim morris ["professor
of computer science and dean of carnegie mellon university's west
coast campus"]. pittsburgh post-gazette. "the tech meltdown
affecting computer jobs as well as stock prices, and the stories
about off-shoring of programming jobs, have caused a decline in
computer science enrollments at colleges and universities across
the country. this wouldn't happen if people understood the real
goals of computer science. ... the current approaches to computer
science education fail to teach the science of computing. as a result,
they fail to inspire the very best and brightest young minds to
enter the field. computer science is faced with scientific challenges
that rival any in history, yet are relevant to practical problems
of today. computer science involves questions that have the potential
to change how we view the world. for example: what is the nature
of intelligence, and can we reproduce it in a machine? ... or, how
can one predict the performance of a complex system? ... or, what
is the nature of human cognition.... or, does the natural world
'compute'? ... computer science education is not just training for
the computer industry. a computer science program is a great preparation
for many careers: business, law, medicine, biology -- any field
touched by computing. ... how does computing fit into the world?
the computer is becoming the interface between people and their
world. computer scientists must know enough history and social science
to chart and predict the impact of computers on the intersecting
worlds of work, entertainment and society. to do this, they must
understand the modern world and its roots. to participate in today's
debates about privacy, one must understand both computers and society."
>>> computer science,
resources for students, ai
overview, ethical & social
implications, cognitive science,
applications
june 14, 2004: computing
needs a grand challenge. by lucy sherriff. the register. "sir
tony hoare - british computing pioneer and senior scientist at microsoft
research - believes the computer industry needs a 'grand challenge'
to inspire it. in the same way that the lunar challenge in the 1960s
sparked a decade of collaborative innovation and development in
engineering and space technology, or the human genome project united
biologists around the globe, so too must computer scientists pull
together on such a scale to take their industry to its next major
milestone. ... one of the grand challenges, then, is to re-write
the basic foundations of the science, to find a theory of computation
that is 'more realistic than the turing model, and can take into
account the discoveries of biology, and the promise of the quantum
computer'.... 'an ultimate joint challenge for the biological and
the computational sciences is the understanding of the mechanisms
of the human brain, and its relationship with the human mind,' he
says. '... this challenge is one that has inspired computer science
since its very origins, when alan turing himself first proposed
the turing test as a still unmet challenge for artificial intelligence.'"
for more information
see: grand
challenges for computing research - sponsored by the uk computing
research committee, with support from epsrc and nesc.
>>> ai
overview, systems, cognitive
science, artificial life, turing
test, alan turing (@ namesakes)
june 14, 2004 [issue date]: innovators
/ artificial intelligence: forging the future - rise
of the machines - these visionaries are making robots that can
perform music, rescue disaster victims and even explore other planets
on their own. by dan cray, carolina a. miranda, wilson rothman, toko
sekiguchi. time magazine. "the bionic engineer - driving
school on mars. television critics will tell you that the bionic
woman was just another cheesy '70s sci-fi series, but for ayanna howard
it was a springboard to a career. when she was 12 years old, she became
so captivated by the show's cyborg premise that she started reading
books that reaffirmed the concept of integrating machines with humans.
a thousand reruns and an electrical-engineering ph.d. later, she's
creating robots that think like humans for nasa's jet propulsion laboratory.
... three years ago, hoping to encourage others to follow in her footsteps,
howard launched a math-and-science mentoring program for at-risk junior
high school girls. ... howard hopes the program will help steer more
young women into robotics, a field she says that within a decade will
produce robots that mimic human thought processes. ... the swarm
keeper - metal insects on wheels. when james mclurkin was a high
school junior on long island, n.y., he built his first robot: a toy
car that he rigged with a keypad, an led display and a squirt gun.
... now a graduate student in computer science at m.i.t., the young
scientist is on the forefront of developing 'swarmbots'--packs of
dozens of small robots that communicate with one another and work
in harmony to complete an assignment. they have no centralized command
system and can cover vast terrain; if one is destroyed, others fill
in. ... rescuer by remote - need help? send in the robot.
within 24 hours of the 9/11 attacks on the world trade center, robin
murphy was on the scene with a team of robots to help sort through
the debris. it was the first real-world test of the center for robot-assisted
search and rescue in tampa, fla., the only unit of its kind on the
planet. ... the mimic maker - the android who learned to dance.
mitsuo kawato is fascinated with the brain -- so he helped build one.
the biophysics engineer and computer researcher led a team at the
advanced telecommunications research institute international in kyoto,
japan, that spent five years constructing a humanoid equipped with
artificial intelligence. completed in 2001, the 6-ft. 2-in., 175-lb.
robot was named dynamic brain, or db for short. says kawato: 'we built
an artificial brain hoping that it'll help us understand the real
one.' ... so far, the robot has acquired about 30 skills, including
juggling, air hockey, yo-yoing, folk dancing and playing the drum."
>>> ai overview, space
exploration, neural networks,
reasoning, robots,
multi-agent systems, artificial
life, military, hazards
& disasters, applications,
machine learning, cognitive
science, careers in ai
(@ resources for students)
june 10, 2004: christopher
longuet-higgins - cognitive scientist with a flair for chemistry.
obituary by chris darwin. the guardian. "christopher longuet-higgins,
who has died aged 80, was not only a brilliant scientist in two distinct
areas - theoretical chemistry and cognitive science - but also a gifted
amateur musician, keen to advance the scientific understanding of
the art. ... in 1967, as a result of a growing interest in the brain
and the new field of artificial intelligence, christopher made a dramatic
change in direction and moved to edinburgh to co-found the department
of machine intelligence and perception, together with richard gregory
and donald michie. it was christopher who, in 1973, was the first
to name this field more broadly as 'cognitive science'. ... as time
went on, tensions arose between the founding members of the department
at edinburgh - partly a reflection of intellectual differences regarding
the future direction of artificial intelligence - which resulted in
a contentious review of the field by christopher's old wykehamist
colleague sir james lighthill. at the instigation of stuart sutherland,
christopher made the decision to move to the experimental psychology
department at sussex university. there, he continued his work in cognitive
science and made major contributions in vision, language production
and music perception."
>>> cognitive science,
tributes, history,
academic departments
(@ resources for students)
june 10, 2004: brain
learns like a robot - scan shows how we form opinions. by tanguy
chouard. nature science update. "researchers may have pinpointed
the brain regions that help us work out good from bad. and their results
suggest that humans and robots are more alike than we may care to
admit, as both use similar strategies to make value judgements. ...
the team also plotted brain activity on a graph to give a mathematical
description of processes that underlie the formation of value judgements.
the patterns they saw resembled those made by robots as they learn
from experience. 'the results were astounding,' says study co-author
peter dayan. 'there was an almost perfect match between the brain
signals and the numerical functions used in machine learning,' he
says. this suggests that our brains are following the laws of artificial
intelligence."
>>> cognitive science,
machine learning, robots
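The "numerical functions used in machine learning" that Dayan's line of work relates to brain signals are commonly identified with temporal-difference (TD) learning, whose prediction error resembles recorded reward signals. A minimal sketch, with an illustrative cue-then-reward task and made-up parameters:

```python
# Temporal-difference value learning: nudge the value of a state toward
# the reward received plus the discounted value of the next state.
alpha, gamma = 0.1, 0.9               # learning rate, discount factor
values = {"cue": 0.0, "reward_state": 0.0}

def td_update(state, next_state, reward):
    prediction_error = reward + gamma * values[next_state] - values[state]
    values[state] += alpha * prediction_error
    return prediction_error

# Repeated cue -> reward episodes: the prediction error shrinks as the
# cue comes to predict the reward.
errors = [td_update("cue", "reward_state", reward=1.0) for _ in range(50)]
print(round(errors[0], 2), round(errors[-1], 3))
```

The shrinking prediction error is the quantity the article says was matched "almost perfectly" by measured brain activity.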
june 7, 2004: brain-mimicking
circuits to run navy robot. by charles choi. united press international.
"researchers in new york city are teaming with the u.s. navy
and scientists in russia to build electronic circuits that mimic the
brain, producing an agile controller that can maneuver robot vehicles
with speed and precision. the devices are based on a circuit in the
cerebellum, the part of the brain that helps organize the body's motions.
specifically, the new technology imitates the olivocerebellar circuit,
which controls balance and limb movement. ... 'controls in robotics
are for the most part algorithmic,' [rodolfo llinas] explained. 'it's
basically software, and the software instructions are written in a
particular order -- you follow a particular set of steps.' in addition,
the computations are contained in a system that is distinct from the
one it controls. 'the nervous system, on the other hand, is not algorithmic,'
llinas said. the same cells that gather the sensory data from the
muscles also have a key role in operating the muscles as well, so
both sensory and motor systems are wedded together, 'unlike what happens
in digital computers.' so the researchers are developing analog circuits....
the new controller, like the olivocerebellar circuit, is made up of
clusters that interact electronically with one another."
>>> autonomous vehicles,
cognitive science, robots,
applications
june 7 - 14, 2004 [issue date]: the
ultimate remote control - one day, our brains might be able to
beam our very thoughts wirelessly to the machines around us. by carl
zimmer. newsweek (international edition) / available from msnbc. "where
computers use zeros and ones, neurons encode our thoughts in all-or-nothing
electrical impulses. and if computers and brains speak the same language,
it should be possible for the two to speak to each other. ... imagine
a quadriplegic person able to operate a robotic arm mounted on a wheelchair
with merely a thought. imagine a digital stream flowing from a microphone
into a deaf person's auditory cortex, where it could become the perception
of sound. these dreams have an official name: brain-machine interfaces.
... at the center for neuroengineering at duke university, monkeys
with electrodes surgically implanted in their brains move robotic
arms with their minds alone."
>>> cognitive science,
interfaces
june 4, 2004: programs
of the mind. review by gary marcus. science magazine (subscription
required). "eric baum's what is thought? [mit press, cambridge,
ma, 2004], consciously patterned after [erwin] schrödinger's book
[what is life?], represents a computer scientist's look at the mind.
baum is an unrepentant physicalist. he announces from the outset that
he believes that the mind can be understood as a computer program.
much as schrödinger aimed to ground the understanding of life in well-understood
principles of physics, baum aims to ground the understanding of thought
in well-understood principles of computation. in a book that is admirable
as much for its candor as its ambition, baum lays out much of what
is special about the mind by taking readers on a guided tour of the
successes and failures in the two fields closest to his own research:
artificial intelligence and neural networks. ... advocates of what
the philosopher john haugeland famously characterized as gofai (good
old-fashioned artificial intelligence) create hand-crafted intricate
models that are often powerful yet too brittle to be used in the real
world. ... at the opposite extreme are researchers working within
the field of neural networks, most of whom eschew built-in structure
almost entirely and rely instead on statistical techniques that extract
regularities from the world on the basis of massive experience."
>>> ai overview, cognitive
science, philosophy, neural
networks, machine learning
may 26, 2004: small
world networks key to memory. by philip cohen. new scientist news
(also appears in the may 22nd issue of new scientist magazine: memories
are made of small worlds, page 12). "if you recall this sentence
a few seconds from now, you can thank a simple network of neurons
for the experience. that is the conclusion of researchers who have
built a computer model that can reproduce an important aspect of short-term
memory. the key, they say, is that the neurons form a 'small world'
network. small-world networks are surprisingly common. human social
networks, for example, famously connect any two people on earth -
or any actor to kevin bacon - in six steps or less. ... 'the philosophical
conclusion is that connectivity matters,' says [northwestern university]
team member sara solla. 'our model uses only a simple caricature of
neurons, yet this network shows this working memory-like behaviour.'
... they found that when 10 to 20 per cent of the neurons participated
in short cuts, the network formed self-sustaining loops of activity."
>>> neural networks, cognitive
science, machine learning
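The "small world" structure the article describes — a mostly local network with a fraction of long-range short cuts — can be sketched in the Watts-Strogatz style. This is an illustrative toy, not the study's neuron model; the node count, neighbourhood size, and 15% rewiring probability are assumptions chosen to sit in the reported 10-20% range:

```python
# Build a ring of n nodes, each wired to its k nearest neighbours, then
# rewire each edge with probability p into a random long-range short cut.
import random

def small_world(n=100, k=4, p=0.15, seed=0):
    rng = random.Random(seed)
    edges = set()
    for i in range(n):
        for j in range(1, k // 2 + 1):
            a, b = i, (i + j) % n        # local ring edge
            if rng.random() < p:         # occasionally rewire: a short cut
                b = rng.randrange(n)
                while b == a:
                    b = rng.randrange(n)
            edges.add((min(a, b), max(a, b)))
    return edges

net = small_world()
print(len(net), "edges, including long-range short cuts")
```

In the study's model, activity circulating through such short cuts is what forms the self-sustaining loops identified with working memory.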
may 20, 2004: a
design epiphany - keep it simple. by jessie scanlon. the new york
times (no fee reg. req'd.). "dr. [john] maeda says the solution
is not better design or better technology but a better partnership
between the two. hence the simplicity design workshop, which could
leverage the lab's understanding of emerging technologies and the
real-world experience of the designers into a series of concrete,
well tested principles. ... in january mr. [bill] moggridge of ideo
met with a media lab group led by cynthia breazeal, an assistant professor
of media arts and sciences, to try to define simplicity. it was easy
to embrace the concept, with its connotations of beauty and elegance
and its promise of a better way, but what did it mean in practical
terms? ... 'we started with the big picture: what does simplicity
mean in the context of our work?' said dr. breazeal, a pioneer of
social robotics whose current project is building a learning companion
robot called roco. 'but the real value is to see how bill approaches
the problem of design.'... a third arm of research focuses on making
computers smarter. one method, a new branch of artificial intelligence,
aims to give computers common sense in the form of a vast database
of mundane truths about the world (the sky is blue, coffee wakes you
up). a second approach, affective computing, gathers information about
the state of the user through a range of sensors, enabling the computer
to adapt by, say, holding delivery of all but high-priority e-mail
when it detects stress."
>>> robots, interfaces,
commonsense, emotion,
cognitive science, reasoning,
representation
may 5, 2004:
united states senate committee on appropriations, defense subcommittee
hearing with public witnesses - testimony of christopher sager,
american psychological association. "although i am sure you are
aware of the large number of psychologists providing clinical services
to our military members here and abroad, you may be less familiar with
the extraordinary range of research conducted by psychological scientists
within the department of defense. ... office of naval research (onr)
the cognitive and neural sciences division (cns) of onr supports research
to increase the understanding of complex cognitive skills in humans;
aid in the development and improvement of machine vision; improve human
factors engineering in new technologies; and advance the design of robotics
systems. an example of cns-supported research is the division’s long-term
investment in artificial intelligence research. this research has led
to many useful products, including software that enables the use of
'embedded training.' many of the navy’s operational tasks, such as recognizing
and responding to threats, require complex interactions with sophisticated,
computer-based systems. embedded training allows shipboard personnel
to develop and refine critical skills by practicing simulated exercises
on their own workstations. once developed, embedded training software
can be loaded onto specified computer systems and delivered wherever
and however it is needed."
>>> education, intelligent
tutoring systems, military,
vision, cognitive
science, robots, applications
may 5, 2004: robot
sex - sure, they're only machines. but the more they interact with
us humans, the more important their apparent gender becomes. the net
effect column by simson garfinkel. technology review. "is your
roomba a boy or a girl? ... 'it’s a girl,' says my wife. 'it’s round.
it’s close to the floor. it ends with an ‘a’. i always think of it as
a ‘wom-ba.’' but if the roomba is a girl, then asimo is definitely a
boy. ... whether or not you think that gender belongs in our mechanical
creations has a lot to do with your vision of how these creatures will
fit into our future. it certainly takes more effort to make a robot
that’s gendered than one that’s asexual. but engineers just want to
have fun. building gender into robots might be a way for the robots’
designers to express their own playfulness and creativity. dig a little
deeper, though, and you’ll discover another reason why gender might
be a good thing for our robot servants: gender will make robots more
compatible with their human masters. as human beings, we constantly
try to layer emotions, desires, and other human qualities onto our machines.
... [i]f you are interested in building an effective interface between
humans and computers, you might just be better off creating a machine
that projects a simulated emotional response. ... can you have sociability
without gender?"
>>> robots, interfaces,
emotion, cognitive
science, history
may 5, 2004: brain
fingerprinting. the washington post hosted an online discussion
with neuroscientist lawrence a. farwell, ph.d., filmmaker michael epstein
and series producer jared lipworth to discuss the pbs innovation documentary.
"[question from] laurel, mont.: how much from
the brain can we learn to help us develop artificial intelligence? jared
lipworth: much work is being done in various parts of the world
to use what we know about the brain for the development of artificial
intelligence. some researchers are trying to build ai machines from
the bottom up--using simple processes to perform complex tasks. others
take the opposite approach, trying to build machines that can mimic
the brain. these efforts are still a long way from producing a machine
with the complexity of the human brain, but everything researchers learn
about the brain helps. artificial intelligence is an area innovation
is following closely, so some time in the near future you may see a
program that delves into exactly the question you asked."
>>> cognitive
science, ai overview, neural
networks
may 3, 2004: facing
facts in computer recognition. the elements of a face can be hard
for computers -- and for some people -- to recognize. by byron spice.
pittsburgh post-gazette. "neuropsychologists debate whether people
have an inborn ability to recognize faces, or whether it is a skill
that develops from earliest infancy. it is a task of such difficulty
and importance, however, that the brain has one area that is largely
devoted just to faces. ... [henry] schneiderman said computers have
less trouble telling the difference between faces than they do simply
picking out faces from other objects in an image. in developing a face
detection program, schneiderman and other computer vision researchers,
such as former robotics institute director takeo kanade, can't tell
the computer precisely what a face is supposed to look like. so part
of the development process involves showing the computer examples of
faces and non-faces and letting the computer program gradually develop
its own statistical rules for determining what constitutes a face. no
one knows how the human brain represents images, but computers use numbers,
with each number representing one point, or pixel, in an image. in black
and white images, the larger the number, the brighter the pixel. ...
schneiderman's face detector has been exhibited at the science museum
of minnesota and next week will be one of the technologies featured
at wired magazine's nextfest exhibition in san francisco. the face detector
is being exhibited as a security technology; presumably it might be
used to detect people who are in secure areas, or to pick out faces
for identification in crowds. but schneiderman noted its first use was
in photo processing. ... eventually, schneiderman envisions it being
used to organize and search images produced by digital cameras."
>>> vision, cognitive
science, image understanding,
pattern recognition, law
enforcement, information retrieval,
applications, machine
learning, exhibits
(@ resources for students)
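the training approach the article describes - showing the program labeled examples of faces and non-faces, with each pixel stored as a brightness number, and letting it derive its own statistical rules - can be sketched in miniature. this toy nearest-mean classifier on 3x3 grayscale patches is purely illustrative: the patch data and labels are invented, and schneiderman's actual detector used far richer statistical models.

```python
# toy sketch: learn to separate "face" from "non-face" image patches
# by averaging labeled examples, then classifying by nearest mean.
# each patch is a flat list of pixel brightnesses (larger = brighter).

def mean_patch(patches):
    """average the training patches pixel-by-pixel."""
    n = len(patches)
    return [sum(p[i] for p in patches) / n for i in range(len(patches[0]))]

def distance(a, b):
    """squared euclidean distance between two patches."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(patch, face_mean, nonface_mean):
    """label a patch by whichever class mean it sits closer to."""
    if distance(patch, face_mean) < distance(patch, nonface_mean):
        return "face"
    return "non-face"

# invented training data: bright-center patches stand in for "faces"
faces = [[0, 9, 0, 9, 9, 9, 0, 9, 0], [1, 8, 1, 8, 9, 8, 1, 8, 1]]
nonfaces = [[5, 5, 5, 5, 5, 5, 5, 5, 5], [9, 0, 9, 0, 0, 0, 9, 0, 9]]

face_mean = mean_patch(faces)
nonface_mean = mean_patch(nonfaces)

print(classify([0, 8, 0, 8, 9, 8, 0, 8, 0], face_mean, nonface_mean))  # "face"
```

the point is only that the program never receives a definition of "face"; the statistical rule (here, the class means) emerges from the examples themselves.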
april 17, 2004: the
semantic engineer - profile: daniel dennett. by andrew brown. the
guardian. "it was at oxford, too, that he first became interested
in computers and the brain. the oxford philosopher john lucas had published
a paper - still famous - arguing that gödel's theorem disproved any
theory that humans must be machines, and that human thought could be
completely simulated on a computer. this is the position dennett became
famous for attacking. ... the essential doctrine that dennett took from
quine was that knowledge - and philosophy - had to be understood as
natural processes. they have arisen as part of the workings of the ordinary
world, which can be scientifically studied, and are not imposed or injected
from some supernatural realm. so there is nothing magical about human
brains - no ghost in the machine, to use ryle's phrase. when we talk
about 'intelligence' we are describing behaviour, or a propensity towards
certain behaviour, and not the exercise of some disembodied intellect.
how these propensities arise is an empirical question, to be answered
by looking at the engineering involved in brains (or computers) and
philosophers who don't do this can't be serious.... he's famous among
philosophers as an extreme proponent of robot consciousness, who will
argue that even thermostats have beliefs about the world. ... 'somehow,
you've got to reduce the [inner] representation, and the representation
understanders, to machinery. and a computer can do that. that's the
great insight. turing saw that ai [artificial intelligence] might not
be the way the brain did it in many regards. but it was a way of reducing
semantic engines to syntactic engines. our brains are syntactic engines.
they have to be, because they're just mechanisms. but what they do is
they extract meaning from the world. hence they're semantic engines.
well, how can they be semantic engines? how could there be a semantic
engine?' ... what matters to him is that consciousness arises from what
the brain does - its work as a 'syntactic engine' - not from what it
is made of. ... 'conscious robot is not an oxymoron - or maybe it was,
but it's not going to be for much longer. how much longer? i don't
know. turing [50 years ago] said 50 years, and he was slightly wrong,
but the popular imagination is already full with conscious robots.'"
>>> philosophy, nature
of intelligence, cognitive science,
ai overview, representation,
turing (@ namesakes),
robots
april 11, 2004: machine
rage is dead ... long live emotional computing. consoles and robots
detect and respond to users' feelings. by robin mckie. the observer.
"computer angst - now a universal feature of modern life - is an
expensive business. but the days of the unfeeling, infuriating machine
will soon be over. thanks to breakthroughs in ai (artificial intelligence),
psychology, electronics and other research fields, scientists are now
creating computers and robots that can detect, and respond to, users'
feelings. the discoveries are being channelled by humaine, a £6 million
programme that has just been launched by the eu to give europe a lead
in emotional computing. as a result, computers will soon detect our
growing irritation at their behaviour and respond - by generating more
sympathetic, human-like messages or slowing down the tempo of the games
they are running. robots will be able to react in lifelike ways, though
we may end up releasing some unwelcome creations - like hal, the murderous
computer of the film 2001: a space odyssey. 'computers that can detect
and imitate human emotion may sound like science fiction, but they are
already with us,' said dr dylan evans, of the university of the west
of england and a key humaine project collaborator. ... a key breakthrough
has been the discovery that cool, unemotional decision-making is not
necessarily a desirable attribute. in fact, humans cannot make decisions
unless they are emotionally involved. 'the cold, unemotional mr spock
on star trek simply could not have evolved,' said artificial intelligence
expert professor ruth aylett of salford university, another humaine
project leader."
>>> emotion, interfaces,
applications, cognitive
science, assistive technologies,
robotic pets, video
games, robots, reasoning,
systems
march 27, 2004: the
isaac newton of logic - it was 150 years ago that george boole published
his classic the laws of thought, in which he outlined concepts that
form the underpinnings of the modern high-speed computer. by siobhan
roberts. the globe and mail (page f9). "it was 150 years ago that
george boole published his literary classic the laws of thought, wherein
he devised a mathematical language for dealing with mental machinations
of logic. it was a symbolic language of thought -- an algebra of logic
(algebra is the branch of mathematics that uses letters and other general
symbols to represent numbers and quantities in formulas and equations).
in doing so, he provided the raw material needed for the design of the
modern high-speed computer. his concepts, developed over the past century
by other mathematicians but still known as 'boolean algebra,' form the
underpinnings of computer hardware, driving the circuits on computer
chips. and, at a much higher level in the brain stem of computers, boolean
algebra operates the software of search engines such as google. ...
the most basic and tangible example is the machinations of boolean searches,
which operate on three logical operators: and, or, not. algebra gets
factored in to this logical equation when boole designates a multiplication
sign (x) to represent 'and,' an addition sign (+) to represent 'or,'
and a subtraction sign (-) to represent 'not.' ... the same 'and' gates
and 'or' gates drive computer circuitry, with streams of electrons performing
boole's algebraic operations -- a computer's bits and bytes operate
on the binary system, as does boole's algebra. he employs the number
1 to represent the universal class of everything (or true) and 0 to
represent the class of nothing (false). ... with his phd in artificial
intelligence, it might appear that ['geoffrey hinton, a computer-science
professor at the university of toronto and his great-great-grandson']
followed after boole. but in fact, he says, 'i'm entirely on the other
side.' the field of artificial intelligence, in its early years circa
1950-60, was committed to the boolean idea that symbols effectively
represent human reasoning. since the eighties, however, artificial intelligence
has come to see human reasoning as not purely logical. rather, it is
more about what is intuitively plausible. 'boole thought the human brain
worked like a pocket calculator or a standard computer,' prof. hinton
says. 'i think we're more like rats.'"
>>> systems & languages,
history, logic,
boole (@ namesakes),
reasoning, web-searching
agents, cognitive science, information
retrieval
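the mapping the article attributes to boole - multiplication for 'and', addition for 'or', subtraction for 'not', with 1 as the universal class (true) and 0 as nothing (false) - can be written out directly. one hedge: boole reserved + for mutually exclusive classes; the x + y - x*y form of 'or' below is the modern convention that keeps 1 or 1 equal to 1.

```python
# boole's arithmetic rendering of the three logical operators,
# using 1 for the universal class (true) and 0 for nothing (false).

def AND(x, y):
    return x * y          # 1*1 = 1; anything multiplied by 0 gives 0

def NOT(x):
    return 1 - x          # subtraction from the universe

def OR(x, y):
    return x + y - x * y  # addition, corrected so 1 or 1 stays 1

# the same truth table that and/or gates realize in computer circuitry
for x in (0, 1):
    for y in (0, 1):
        print(x, y, "and:", AND(x, y), "or:", OR(x, y))
```

these are exactly the operations a boolean search engine evaluates, and the gates that streams of electrons perform on a chip.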
march 23, 2004: fit
speaker to discuss computers' intelligence. by alex mcpeak. daily
helmsman. "one of the major challengers of artificial intelligence
will speak in the zone at the fedex institute of technology at 1:30
p.m. wednesday. john searle, mills professor of the philosophy of mind
and language at the university of california at berkeley, will discuss
consciousness, causation, reduction and the symbol grounding problem
-- tongue-twister concepts that confront whether a computer can ever
understand what it is doing. ... the author of 13 books related to cognitive
science, searle is best known for his chinese room thought experiment,
which challenged the idea of a computer ever achieving true intelligence
and understanding. the chinese room proposed that if a person were given
chinese characters with which to interpret chinese writings in a room,
that person could match characters to understand what was written on
the walls. ... searle was the top pick for the cognitive science seminar
this semester, [lee] mccauley said. the seminar will look at responses
to searle's intellectual challenge and the systems that claim to answer
it. ... the culmination of the cognitive science seminar this semester,
he said, was to set up criteria to prove if artificial intelligence
can really answer the chinese room challenge."
>>> philosophy, turing
test, cognitive science, ai
overview
march 18, 2004: a
grand plan for brainy robots. by nick dermody. bbc news. "on
a good day, lucy can tell a banana apart from an apple. and that's a handy
skill to have if you are an orang-utan. even a robotic one. it might
not sound like much to a too-clever-to-know-it human like you or me,
but it represents pioneering work in the field of artificial intelligence.
... by going back to first principles, this self-taught scientist [steve
grand] has created one of the most advanced robot 'brains' in the world.
his baby, lucy, may not be much to look at, but she represents perhaps
the best example yet of how far we can get computers to 'think' for
themselves - one of the most advanced artificial life-forms in existence.
... [h]e is still waiting for the key breakthrough, the one sentence
or 'formula' for describing what the brain - and its intelligence -
is actually for. 'until we've got that, we will never be able to make
artificial intelligence,' he said."
>>> robots, neural
networks, cognitive science, machine
learning
march 5, 2004: robo
doc. by jon excell. the engineer / e4engineering.com. "it is
tempting to view the robot simply as a clever marketing tool, and as
a sophisticated showcase for honda's technical skill its impact is undeniable.
but the diminutive android is much more than an impressive branding
exercise. prof edgar korner, the company's robotics and artificial intelligence
(ai) supremo, insists that asimo represents a key step towards the era
of the domestic robot. ... in the longer term, korner claimed, it is
the technologies that we broadly define as ai that require the most
work. 'asimo is a marvellous walking machine, a masterpiece of engineering,'
he said. 'but the next stage is to enable it to develop the ability
to 'think' for itself, to an extent where it can get on with its chores
without bothering its owner.' ... the further development of ai will,
claimed korner, be made possible by ongoing advances in the understanding
of human and animal brains. ... in the shorter term, technology developed
for asimo is already having some interesting spin-off applications.
... honda's work on machine intelligence is now being used to develop
an accident-prevention system for cars. ... some have claimed that there
is a sense in which humanoid robot development - and more specifically
ai - occupies a similarly ambiguous moral space to genetic engineering
or nanotechnology, with scientists developing technology that has the
potential to completely change the way we think about the world. korner
does not agree. 'from the point of ethics honda was very careful to
stress from the beginning that this is a machine. this is not intended
to copy a human. the message is that we don't want to copy humans, we
want to create a useful machine for serving humans.'"
>>> cognitive science, machine
learning, nature of intelligence,
ethical & social implications,
robots, assistive
technologies, transportation,
applications
february 29, 2004: artificial
emotion. by sam allis. boston globe / available from boston.com.
"sherry turkle is at it again. this friday, she's hosting a daylong
powwow at mit to discuss 'evocative objects.' ... over the past two
decades, the founder of the mit initiative on technology and self has
been watching how our relationships with machines, high tech and low
tech, develop. turkle is best known for her place at the table in any
discussion of how computers -- and robots in particular -- will change
our lives. this makes her an essential interlocutor in the palaver,
sharpened two years ago by a piece written by sun microsystems cofounder
bill joy, that robots are going to take over the world, soon. 'the question
is not what computers can do or what computers will be like in the future,'
she maintains, 'but rather, what we will be like.' what has become increasingly
clear to her is that, counterintuitively, we become attached to sophisticated
machines not for their smarts but their emotional reach. 'they seduce
us by asking for human nurturance, not intelligence,' she says. ...
the market for robotics in health care is about to explode, turkle says.
the question is: do we want machines moving into these emotive areas?
'we need a national conversation on this whole area of machines like
the one we're having on cloning,' turkle says. 'it shouldn't be just
at ai conventions and among ai developers selling to nursing homes.'"
>>> ethical & social implications,
robotic pets, emotion,
assistive technologies, cognitive
science, robots, applications
february 8, 2004: mind
over gray matter - york philosopher's new book explores controversial
relationship between culture and consciousness. book review by olivia
ward. toronto star. "[david martel] johnson's newly published book,
how history made the mind, goes to the heart of a scientific controversy
between those who believe the physical brain is the most important factor
in development of the mind, and those who believe culture is the determining
factor. ... johnson's theory takes its place in the relatively new discipline
of cognitive science, the study of the mind and how it works. launched
only 50 years ago, the field is a catch-all for mathematicians, psychologists,
linguistics specialists, anthropologists, biologists and artificial
intelligence experts as well as philosophers. ... in johnson's view,
it took some 100,000 years or more before mankind first formed the kind
of abstract thoughts that led to painting on cave walls, fashioning
jewellery and designing complicated tools. 'before that time people
thought in very concrete terms, not in symbols,' he says. 'they hunted
prey, mastered survival and buried their dead, just as the neanderthals
did.' it's a theory opposed by strict followers of charles darwin, who
believe that because of their large brains, the first humans were capable
of the same thought processes we know today as soon as they evolved
from apes. ... according to [julian] jaynes, a new kind of thought arose
because all the accumulated experience of the past wasn't enough to
help people cope with the increasingly sophisticated societies that
were taking root at that time. a new kind of thinking was required,
one that looked at the world objectively. the greeks rose to the challenge
and developed 'conscious thought.' however, says johnson, 'it's an exciting
theory, but it's wrong. after all, a dog has consciousness. so did early
man. he may have been different from us, but he wasn't that different.'
johnson's historically based theories may be less popular than some
of the prevailing ones -- such as noam chomsky's 'computationalism,'
that the brain is a kind of genetically determined computer."
>>> cognitive science, nature
of intelligence, philosophy, emotions,
reasoning, machine
learning
february 4, 2004: pentagon
kills lifelog project. by noah shachtman. wired news. "the
pentagon canceled its so-called lifelog project, an ambitious effort
to build a database tracking a person's entire existence. ... lifelog's
backers said the all-encompassing diary could have turned into a near-perfect
digital memory, giving its users computerized assistants with an almost
flawless recall of what they had done in the past. but civil libertarians
immediately pounced on the project when it debuted last spring, arguing
that lifelog could become the ultimate tool for profiling potential
enemies of the state. ... lifelog would have addressed one of the key
issues in developing computers that can think: how to take the unstructured
mess of life, and recall it as discrete episodes -- a trip to washington,
a sushi dinner, construction of a house. 'obviously we're quite disappointed,'
said howard shrobe, who led a team from the massachusetts institute
of technology artificial intelligence laboratory which spent weeks preparing
a bid for a lifelog contract. 'we were very interested in the research
focus of the program ... how to help a person capture and organize his
or her experience. this is a theme with great importance to both ai
and cognitive science.'"
>>> applications, cognitive
science, representation, reasoning,
agents, data
mining, ethical & social implications
january 30, 2004: u
of m starts new company for research inventions. by scott shepard.
memphis business journal. "as artificial intelligence goes from
science fiction to an everyday tool, the scientists who are at the center
of it aim to keep it closer to home. ... that's the intent of iidsystems,
a business being developed at the university of memphis in conjunction
with the technology resources foundation to commercialize the university's
technology and encourage small businesses to form in memphis. ... or,
iidsystems could own a suite of integrated products. one candidate for
that is epal, which will integrate several forms of artificial intelligence
to create a personal teaching mentor, with a talking head on the computer
screen. 'maybe we can combine all of our intelligent systems, and not
just those for learning,' [eric] mathews says. ... the u of m is on
the cusp of churning out a wide array of learning tools in the next
few years. there are concepts that teach critical, creative thinking,
and systems that can read and react to human emotion. technology development
is beginning to slip out of the hands of technocrats, [art] graesser
says, and that's good. 'we already know that when you put a cd in your
computer and you hit a glitch and get stuck, 98% of the time you stay
stuck right there and that cd ends up on a pile,' he says. 'it's counterintuitive,
but cognitive psychologists now develop a lot of software. we're not
all freudians; if you're going to design something that people use,
you have to know a lot about how people think.'"
>>> applications, education,
intelligent tutoring systems, emotion,
cognitive science
january 16, 2004: yale
holds discussion on computers, emotions, and artificial intelligence.
by laura young. the yale herald (volume xxxvii, number 1). "laura
is part of an endeavor to push the limits of human-computer relations
-- a computerized personal trainer designed to build a longterm, social-emotional
relationship with her user as she jokes, coaches, and converses with
him. she's the next step in the attempt to build emotional computers
-- machines capable of having relationships with human beings. laura
was just one of the programs demonstrated by rosalind w. picard in
her workshop on wed., jan. 15. 'towards computers that recognize and
respond to human emotion' was part of a series sponsored by the technology
and ethics working group, devoted to the question of computers' capability
for emotional intelligence. ... picard feels that the interaction between
humans and computers is natural and social, but that its current state
is more frustrating than anything else. she likened the human-computer
relationship to the relationship between a driver and an annoying passenger
who just cannot understand how the other person feels."
>>> emotion, interfaces,
cognitive science, natural
language processing
january 15 -21, 2004: my
service bot. techsploits column by annalee newitz. metro. "[peter]
plantec's book is a guide for creating what he calls v-people: social
bots that businesses can use to replace service workers or game players
online. programmed ask jeeves-style to answer questions in a way that
sounds natural and to deploy friendly facial expressions at the right
moments, v-people are the bank tellers and customer-service reps of
the future. according to plantec and researchers like cory kidd at mit,
people warm up to v-people fairly quickly after their initial moment
of disbelief that the person talking to them and smiling is just a program.
kidd conducted a series of psychological experiments last year showing
that people respond to animated and automated creatures in almost exactly
the same way they respond to humans. ... plantec writes in his book
that his main concern about the ethics of using v-people in customer
service situations is that users tend to credit machines with more honesty
and innocence than they do their fellow humans. in trial runs of his
v-people, he reports that users 'took what the v-person said as truth
or error but never considered that the character was trying to deceive
them. ... after all, how could a virtual human have ulterior motives
... how could they have any motives at all?'"
>>> cognitive science, interfaces,
customer service, chatbots
(@ natural language processing),
ethical & social implications, robots,
applications
january 2004: why
machines should fear - once a curmudgeonly champion of "usable"
design, cognitive scientist donald a. norman argues that future machines
will need emotions to be truly dependable. by w. wayt gibbs. scientific
american. "'the cognitive sciences grew up studying cognition--rational,
logical thought,' he notes. norman himself participated in the birth
of the field, joining a program in mathematical psychology at the university
of pennsylvania and later helping to launch the human information-processing
department (now cognitive science) at the university of california at
san diego. 'emotion was traditionally ignored as some leftover from
our animal heritage,' he says. 'it turns out that's not true. we now
know, for example, that people who have suffered damage to the prefrontal
lobes so that they can no longer show emotions are very intelligent
and sensible, but they cannot make decisions.' although such damage
is rare, and he cites little other scientific evidence, norman concludes
that 'emotion, or 'affect,' is an information processing system, similar
to but distinct from cognition. with cognition we understand and interpret
the world--which takes time,' he says. 'emotion works much more quickly,
and its role is to make judgments--this is good, that is bad, this is
safe.' ... 'i'm not saying that we should try to copy human emotions,'
norman elaborates. 'but machines should have emotions for the same reason
that people do: to keep them safe, make them curious and help them to
learn.' autonomous robots, from vacuum cleaners to mars explorers, need
to deal with unexpected problems that cannot be solved by hard-coded
algorithms, he argues."
>>> emotion, cognitive
science, interfaces, robots
january 7, 2004: the
ultimate global network - within 20 years computers will be everywhere,
and they'll all be talking to each other. daunting? not if we're prepared,
says a group of british scientists. by richard sarson. the independent.
"to ward off these evils and prepare for the future, [tony] hoare
and [robin] milner are launching a series of 'grand challenges' to the
uk's computer scientists. the seven challenges spin off in different
directions from a single big idea: that all the computers in the world
will become part of one global ubiquitous computer. hoare wants 'to
understand these enormous artefacts, which have rather escaped the control
of their original designers. at one time, the complexity may have been
artificial, but now it is almost natural, rather like the complexity
of organic chemistry.' ... the final challenge moves from basic biology
to 'the architecture of brain and mind'. this will bring together biologists,
brain physiologists, nerve scientists, psychologists, linguists, social
scientists and philosophers to work out how the grey and white mush
of our brain can constitute the most powerful and complicated computer
on the planet: our mind. scientists have been trying to create intelligent
robots for years, with little success. this grand challenge is having
another go at understanding how to do this. ... the challenges will
not end up as instant software tools to run the world. that, says hoare,
is the 'job of the entrepreneur'. but the scientists can provide the
theory behind those tools."
>>> ai overview, systems,
cognitive science, robots,
ethical & social implications
december 28, 2003: mother
of invention - virtual cow fences and self-reconfiguring automatons
are just two of mit roboticist daniela rus's futuristic visions. by
rich barlow. the boston globe /available from boston.com. "[daniela]
rus, who last year won a macarthur 'genius' grant at age 39, invests
her work with quasi-spiritual purpose as well. inventing machines that
build scaffolding and rescue victims -- in short, that act like people
-- 'means to study life, to get an understanding of how we're made up,'
she says. 'understanding life is a great and noble quest, because that's
how we understand ourselves.' ... some roboticists are 'absolutely aghast'
when critics question their brave new world, [rodney] brooks says. rus
invited students to pause and ponder it. the mechanics of building robots
are fine, she says, but arguing big philosophical issues revs students'
passion, so that they just don't 'passively sit back and suck all the
information you give to them.' the climactic project of her artificial-intelligence
classes at dartmouth (one she hopes to continue at mit) assigned debate
teams to duel over such topics as whether robots might rule the world
someday, or the urgency of enacting writer isaac asimov's 'three laws
of robotics,' mandating that robots never harm humans. student carl
stritter's topic was whether artificial-intelligence research would
benefit humanity. 'never before had i heard a professor,' he says by
e-mail, 'after teaching us a subject for 10 weeks, ask the class whether
or not it had been, in essence, a waste of time.'"
>>> robots, cognitive
science, ethical & social implications,
philosophy, agriculture,
hazards & disasters, scifi,
applications
december 2003: the
love machine - building computers that care. by david diamond. wired
magazine. "i have seen the future of computing, and i'm pleased
to report it's all about ... me! this insight has been furnished with
the help of tim bickmore, a doctoral student at the mit media lab. he's
invited me to participate in a study aimed at pushing the limits of
human-computer relations. what kinds of bonds can people form with their
machines, bickmore wants to know. ... bickmore's area of study is called
affective computing. its proponents believe computers should be designed
to recognize, express, and influence emotion in users. rosalind picard,
a genial mit professor, is the field's godmother; her 1997 book, affective
computing, triggered an explosion of interest in the emotional side
of computers and their users. ... and she developed an interest in the
work of neuroscientist antonio damasio. in his 1994 book, descartes'
error, damasio argued that, thanks to the interplay of the brain's
frontal lobe and limbic systems, our ability to reason depends in part
on our ability to feel emotion. too little, like too much, triggers
bad decisions. the simplest example: it's an emotion - fear - that governs
your decision not to dive into a pool of crocodiles."
>>> emotion, reasoning,
interfaces, natural
language processing, cognitive science,
image understanding, pattern
recognition, intelligent tutoring systems,
customer service, education,
assistive technologies, robots,
applications
november 21, 2003: man
vs. computer - still a match. opinion by charles krauthammer. the
washington post. "to most folks, all of this man-vs.-computer stuff
is anticlimax. after all, the barrier was broken in 1997 when man was
beaten, kasparov succumbing to deep blue in a match that was truly frightening.
frightening not so much because the computer won but because of how
it won, making at some points moves of subtlety. and subtlety makes
you think there might be something stirring in all that silicon. it
seems to me obvious that machines will achieve consciousness. after
all, we did, and with very humble beginnings. ... interestingly, in
each game that was won, the loser was true to his nature. kasparov lost
game 2 because, being human, he made a tactical error. computers do
not. ... in game 3 the computer lost because, being a computer, it has
(for now) no imagination. ... in the meantime, kasparov is showing that
while we can't outcalculate machines, we can still outsmart them."
>>> chess, neural
networks, emotion, creativity,
nature of intelligence, philosophy,
games & puzzles, cognitive
science; also see our related newstoon
october 22, 2003: landmark
invention. by scott warren and stephanie brooking. blue mountains
gazette. "forget about the space age, artificial intelligence could
be among us in the near future thanks to a glenbrook man who has developed
a robot prototype able to perform up to 16 tasks at once. the technology,
developed by glenbrook's dr peter hill, allows the robots to modify
their behaviour according to the situation. the program also mimics
a human approach to a problem, launching multiple tasks with any excess
capacity, a problem solving trait commonly attributed to women. 'we
deliberately chose to mimic the female rather than the male mind. the distinct
differences in the way women prioritise and work, in particular the
ability to start new tasks while others are still in progress, is important
in this field of producing new technology.' dr hill said."
>>> robots, cognitive
science, reasoning, applications,
systems; also see our related newstoon
october 14, 2003: imagining
thought-controlled movement for humans. by sandra blakeslee. the
new york times (no fee reg. req'd.). "scientists at duke university
reported yesterday in the first issue of the public library of science,
a new journal with free online access at www.publiclibraryofscience.org,
that a monkey with electrodes implanted in its brain could move a robot
arm, much as if it were moving its own arm. ... the ability to make
machines that respond to thoughts rests on some fundamental properties
of the nervous system. the brains of monkeys plan every movement the
body carries out fractions of seconds before the movements actually
occur. motor plans are in the form of electrical patterns which arise
from cells that fire at the same time, from various parts of the brain.
the plans are sent to spinal cord neurons that have direct access to
muscles. only then are movements carried out. to link brains and machines,
researchers place electrodes directly into parts of the brain that produce
motor plans. they extract raw electrical signals that can be translated
mathematically into signals that computers and robots understand."
>>> cognitive science, interfaces,
robots, applications
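the step the article summarizes - extracting raw electrical signals and translating them mathematically into signals a robot arm understands - is often done with population-vector style decoding: each neuron "votes" for its preferred movement direction, weighted by how fast it fires. the sketch below is a simplified illustration with invented preferred directions, firing rates, and baseline, not the duke team's actual algorithm.

```python
# toy sketch of decoding a movement command from neural firing rates.
# each neuron votes for its preferred direction, weighted by how far
# its rate rises above baseline; the summed votes give a velocity.
import math

# invented preferred directions (radians) and firing rates (spikes/sec)
preferred = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]
rates = [40.0, 10.0, 5.0, 10.0]   # the rightward-preferring neuron fires most

baseline = 10.0  # invented resting rate subtracted from each neuron

vx = sum((r - baseline) * math.cos(d) for r, d in zip(rates, preferred))
vy = sum((r - baseline) * math.sin(d) for r, d in zip(rates, preferred))

print(round(vx, 1), round(vy, 1))  # decoded velocity points rightward
```

in a real system these decoded velocities, recomputed many times per second, become the stream of commands that moves the robot arm fractions of a second after the motor plan forms.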
october 14, 2003: leading
humanity forward. by a. asohan. the star (malaysia). "the whole
idea of linking humans with machines has two aspects to it, says [professor
kevin] warwick. 'first, we’re working with people with spinal injuries,
like the stoke mandeville hospital in britain, to see if the kind of
technologies that we deal with, can help people with one kind of disability
or the other.' ... 'the second aspect is looking at humans as we are
now. can we take technology and by linking with it, create superhumans
- give ourselves abilities that we simply don’t have at the present
time? we’re looking at machines and how they are intelligent, and asking
what kind of features they have that we humans do not, and questioning
what we can gain by linking much more closely to machines,' says warwick.
inevitably, the most relevant technology in this idea is the computer.
... thus his quest to link the human brain to a machine mind. it’s not
a wholly new idea, but certainly one that found new impetus in the 1980s
with the cyberpunk literary movement. groundbreaking novels like william
gibson’s neuromancer and the increasing pervasiveness of computer
technology in our everyday lives had even the most sober of scientists
asking where our increasing interdependence on technology, and possible
integration with technology, might lead the human race to. ... warwick
has been labelled a prophet of doom by the tabloids, quoted as saying
that machine intelligence would overtake humans in the near-future.
while he has been criticised heavily for it by some members of the scientific
community, on the surface, his dire predictions are reminiscent of those
expressed by others, not the least of whom is bill joy, previously the
chief scientist of us network computer company sun microsystems inc.
... warwick argues that it all depends on how one defines 'intelligence,'
a task he attempted in his book qi: the quest for intelligence.
'to me, intelligence is a very basic thing. in my book qi,
we tried to look at what is intelligence - human intelligence, animal
intelligence, machine intelligence and tried to get the basics of
it. the conclusion that i would come to now is that it’s the mental
ability to sustain successful life.' ... of course, we humans like to
pride ourselves on being conscious, self-aware beings. cogito, ergo
sum - i think, therefore i am, said the 17th century philosopher
and mathematician rene descartes. it’s our edge over the machine - it
may process information much faster than us, but it is not aware of
what it is it processes. that edge is no big deal to warwick’s way of
thinking. indeed, he argues that there is no evidence that being conscious
- the way humans are - is an effective protective mechanism."
>>> ai overview, cognitive
science, applications, ethical
& social implications, emotions,
scifi, philosophy,
systems, nature
of intelligence; also see our series but
is it ai? and the summer
2003 & fall
2003 ai in the news columns
september 26, 2003: the
octopus as eyewitness. by michelle delio. wired news. "robots
and people may soon be looking at the world through octopus eyes. albert
titus, an assistant professor of electrical engineering at the university
at buffalo, new york, has created a silicon chip that mimics the structure
and functionality of an octopus retina. his 'o-retina' chip can process
images just like an octopus eye does. the chip could give sight to rescue
or research robots, allowing them to see more clearly than human eyes
can in dark or murky conditions. ... his ultimate goal is to build a
complete artificial vision system, including a brain that mimics the
visual systems of various animals, so humans can look at the world differently.
... 'the visual system is more than eyes,' titus said. 'an animal uses
eyes to see, but the brain to perceive. yet, the retina is an extension
of the brain, so where does the distinction between seeing and perceiving
begin and end?'"
>>> vision,
robots, hazards & disasters,
cognitive science
september 23, 2003: computers'
messages need poetic writers. column by desiree cooper. detroit
free press. "i'm beginning to think that the science fiction writers
were right: machines will take over the world. robotics is making it
possible for cars to drive themselves. in the near future, police will
use robotic dogs to sniff out drugs and biological weapons. robotic
house-helpers now sweep the floor while guarding against intruders.
machines do seem to have everything going for them: artificial intelligence,
nerves of steel, a durable constitution. but they won't stand a chance
at world domination until they improve their people skills. ... as far
as i can tell, the e-mail about the haiku error messages is probably
one of those cyber legends that has been circulating since late in the
last century. still, it struck me as a marvelous idea. in addition to
merging artificial intelligence with machinery, why not add some creative
intelligence as well?"
>>> poetry, creativity,
robots, interfaces,
applications, cognitive
science
september 11, 2003: beyond
voice recognition, to a computer that reads lips. by anne eisenberg.
the new york times (no fee reg. req'd.). "[t]eaching computers
to read lips might boost the accuracy of automatic speech recognition.
listeners naturally use mouth movements to help them understand the
difference between 'bat' and 'pat,' for instance. if distinctions like
this could be added to a computer's databank with the aid of cheap cameras
and powerful processors, speech recognition software might work a lot
better, even in noisy places. scientists at i.b.m.'s research center
in westchester county, at intel's centers in china and california and
in many other labs are developing just such digital lip-reading systems
to augment the accuracy of speech recognition. ... at intel, too, researchers
have developed software for combined audiovisual analysis of speech
and released the software for public use as part of the company's open
source computer vision library, said ara v. nefian, a senior intel researcher
who led the project. ... iain matthews, a research scientist at carnegie
mellon university's robotics institute who works mainly on face tracking
and modeling, said that audiovisual speech recognition was a logical
step. 'psychology showed this 50 years ago,' he said. 'if you can see
a person speaking, you can understand that person better.'"
>>> vision, speech,
natural language processing, applications,
software, cognitive
science
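the 'bat' versus 'pat' example above is, at bottom, a fusion problem: an audio model and a lip-reading model each produce word probabilities, and the system combines them, trusting vision more when the audio is noisy. the sketch below shows one common combination scheme (log-linear late fusion); the words, probabilities and weight are invented for illustration and are not ibm's or intel's actual systems.

```python
# hedged sketch of late fusion for audiovisual speech recognition.

def fuse(audio_probs, visual_probs, audio_weight):
    """log-linear fusion: p proportional to audio^w * visual^(1-w),
    then renormalized so the result is a probability distribution."""
    fused = {w: (audio_probs[w] ** audio_weight) *
                (visual_probs[w] ** (1.0 - audio_weight))
             for w in audio_probs}
    total = sum(fused.values())
    return {w: p / total for w, p in fused.items()}

audio = {"bat": 0.52, "pat": 0.48}    # noisy audio: nearly a coin flip
visual = {"bat": 0.85, "pat": 0.15}   # lips clearly close then open: 'b'

# in noise we down-weight audio, so the visual evidence decides it
print(fuse(audio, visual, audio_weight=0.3))
```

lowering `audio_weight` as measured noise rises is the usual way such systems stay accurate "even in noisy places."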
september 2003: the
man who mistook his girlfriend for a robot. by dan ferber. popular
science. "no one asks why, of all the roboticists in the world,
only [david] hanson appears to be attempting to build a robotic head
that is indistinguishable in form and function from a human. no one
points out that he is violating a decades-old taboo among robot designers.
and no one asks him how he's going to do it -- how he plans to cross
to the other side of the uncanny valley. ... in the late '70s, a japanese
roboticist named masahiro mori published what would become a highly
influential insight into the interplay between robotic design and human
psychology. mori's central concept holds that if you plot similarity
to humans on the x-axis against emotional reaction on the y, you'll
find a funny thing happens on the way to the perfectly lifelike android.
predictably, the curve rises steadily, emotional embrace growing as
robots become more human-like. but at a certain point, just shy of true
verisimilitude, the curve plunges down, through the floor of neutrality
and into real revulsion, before rising again to a second peak of acceptance
that corresponds with 100 percent human-like. this chasm -- mori's uncanny
valley -- represents the notion that something that's like a human but
slightly off will make people recoil. here there be monsters. [cynthia]
breazeal, creator of kismet, has, like many of her colleagues, taken
both inspiration and warning from the uncanny valley. ... as hanson's
work progressed, it became ever more clear that making lifelike robot
heads meant more than building a convincing surface and creating realistic
facial expressions. so late last year he began to consider k-bot's brain.
the internet led him to a los angeles company, eyematic, which makes
state-of-the-art computer-vision software that recognizes human faces
and expressions. ... [javier] movellan has asked hanson to build him
a head, and is hoping to give it social skills. he and marian bartlett,
a cognitive scientist who co-directs the ucsd machine perception lab,
have collaborated in the development of software featuring an animated
schoolteacher who helps teach children to read. ... the scientific question,
hanson says, is 'whether people respond more powerfully to a three-dimensional
embodied face versus a computer-generated face.'"
>>> robots, education,
vision, cognitive
science, interfaces
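mori's curve - steady rise, a plunge just short of verisimilitude, then recovery - can be rendered as a simple function. the particular formula below is invented purely to reproduce the shape of the argument; mori proposed a qualitative curve, not an equation.

```python
import math

# toy rendering of the uncanny valley: response rises with human-likeness,
# dips sharply near (but short of) full verisimilitude, then recovers.
def mori_curve(likeness):
    """likeness in [0, 1] -> signed emotional response (toy model)."""
    rise = likeness                                        # growing acceptance
    valley = -1.6 * math.exp(-((likeness - 0.85) ** 2) / 0.004)
    return rise + valley

for x in (0.2, 0.6, 0.85, 1.0):
    print(f"likeness {x:.2f} -> response {mori_curve(x):+.2f}")
```

the dip at likeness 0.85 drops "through the floor of neutrality" into negative territory, while 0.6 (stylized, clearly robotic) and 1.0 (fully human-like) both score positively - the two peaks the article describes.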
august 30, 2003: mind-expanding
machines - artificial intelligence meets good old-fashioned human
thought. by bruce bower. science news online (vol. 164, no. 9). "when
kenneth m. ford considers the future of artificial intelligence, he
doesn't envision legions of cunning robots running the world. nor does
he have high hopes for other much-touted ai prospects -- among them,
machines with the mental moxie to ponder their own existence and tiny
computer-linked devices implanted in people's bodies. when ford thinks
of the future of artificial intelligence, two words come to his mind:
cognitive prostheses. ... in short, a cognitive prosthesis is a computational
tool that amplifies or extends a person's thought and perception, much
as eyeglasses are prostheses that improve vision. ... current ihmc projects
include an airplane-cockpit display that shows critical information
in a visually intuitive format rather than on standard gauges; software
that enables people to construct maps of what's known about various
topics, for use in teaching, business, and web site design; and a computer
system that identifies people's daily behavior patterns as they go about
their jobs and simulates ways to organize those practices more effectively.
such efforts, part of a wider discipline called human-centered computing,
attempt to mold computer systems to accommodate how humans behave rather
than build computers to which people have to adapt. ... just as it proved
too difficult for early flight enthusiasts to discover the principles
of aerodynamics by trying to build aircraft modeled on bird wings, ford
argues, it may be too hard to unravel the computational principles of
intelligence by trying to build computers modeled on the processes of
human thought. that's a controversial stand in the artificial intelligence
community."
>>> interfaces, cognitive
science, applications, ai
overview, turing test, education
august 20, 2003:
30-year
robot project pitched - researchers see tech windfalls in costly
humanoid quest. the japan times. "japanese researchers in robot
technology are advocating a grand project, under which the government
would spend 50 billion yen a year over three decades to develop a humanoid
robot with the mental, physical and emotional capacity of a 5-year-old
human. ... unlike cartoonist [osamu] tezuka's 'atom' character, known
as 'astro boy' overseas, based on an image of a 9-year-old boy, the
atom project aims to create a humanoid robot with the physical, intellectual
and emotional capacity of a 5-year-old that would be able to think and
move on its own, the researchers say. ... 'most of today's robots operate
with a program written by humans. in order to develop a robot that can
think and move like a 5-year-old, we have to first understand the mechanism
of how human brains work,' [mitsuo] kawato said, admitting the difficulty
of his project. 'that will be equal to understanding human beings.'
but the researchers believe such daunting challenges, once overcome
in the development process, would bring huge benefits in terms of technology
and knowledge."
>>> robots, cognitive
science, ai overview
august 7, 2003: new
uc program explores brain/actions. by roy wood. the cincinnati post
"a new university of cincinnati undergraduate study track is aimed
at helping future researchers understand the link between the brain
and human behavior. the new brain and mind studies track for a bachelor's
degree in interdisciplinary studies could help scientists find cures
for neurological disorders, create artificial intelligence systems,
or simply foster understanding of how we think and act, university officials
say. ... many other universities offer degrees in cognitive science,
which is focused in large part on how computers can replicate the mind
and its functions, [michael] riley said. the uc program adds the neurological
element to the mix, as well as philosophy."
>>> cognitive science, academic
departments (@ resources for students)
july 28, 2003: a
veritable cognitive mind. by r. colin johnson. ee times. "marvin
minsky, mit professor and ai's founding father, says today's artificial-intelligence
methods are fine for gluing together two or a few knowledge domains
but still miss the 'big' ai problem. indeed, according to minsky, the
missing element is something so big that we can't see it: common sense.
'to me the problem is how to get common sense into computers,' said
minsky. 'and part of that, it seems to me, is not how to solve any particular
problem but how to quickly think of a new way to solve it-perhaps through
a change in emotional state-when the usual method doesn't work.' in
his forthcoming book, the emotion machine, minsky shares his accumulated
knowledge on how people make use of common sense in the context of discovering
that missing cognitive glue. ... reasoning by analogy is a way of adapting
old knowledge, which almost never perfectly matches the present situation,
by following a recipe of detecting differences and tweaking parameters.
it all happens so quickly that no 'thinking' seems to be involved."
>>> commonsense, analogy,
emotion, reasoning,
representation, cognitive
science, ai overview
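minsky's recipe - adapt old knowledge by "detecting differences and tweaking parameters" - can be illustrated with a deliberately tiny example. the case, fields and scaling rule below are all made up; nothing here comes from the emotion machine itself.

```python
# toy illustration of reasoning by analogy: reuse an old solution by
# changing only the parameter that differs in the new situation.
old_case = {"task": "boil water", "volume_l": 1.0, "minutes": 4}

def adapt(old, new_volume_l):
    """assume (purely for illustration) that heating time scales
    linearly with volume; copy the old case and tweak both fields."""
    tweaked = dict(old)
    tweaked["volume_l"] = new_volume_l
    tweaked["minutes"] = old["minutes"] * new_volume_l / old["volume_l"]
    return tweaked

print(adapt(old_case, 2.0))  # double the volume, double the time
```

the point of the sketch is the shape of the process, not the physics: old knowledge "almost never perfectly matches the present situation," so the work is in the difference-detection and the tweak.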
july 28, 2003: rat-brained
robot does distant art. by lakshmi sandhana. bbc. "the 'brain'
lives at dr steve potter's lab at georgia's institute of technology,
atlanta, while the 'body' is located at guy ben-ary's lab at the university
of western australia, perth. the two ends communicate with each other
in real-time through the internet. the project represents the team's
effort to create a semi-living entity that learns like the living brains
in people and animals do, adapting and expressing itself through art.
... the computer translates any resulting neural activity into robotic
arm movement. by closing the loop, the researchers hope that the rat
culture will learn something about itself and its environment. 'i would
not classify [the cells] as 'an intelligence', though we hope to find
ways to allow them to learn and become at least a little intelligent.'
said dr potter. ... dr potter hopes the venture will provide valuable
insights into how learning occurs at a cellular level."
>>> neural networks & connectionist
systems, machine learning, cognitive
science, art
june 24, 2003: letting
your computer know how you feel. by cliff saran. computerweekly.
"kate hone, a lecturer in the department of information systems
and computing at brunel university, is the principal investigator in
a project that aims to evaluate the potential for emotion-recognition
technology to improve the quality of human-computer interaction. her
study is part of a larger area of computer science called affective
computing, which examines how computers affect and can influence human
emotion. hone described her research at brunel as a human factor investigation.
she said, 'we are trying to build a system that recognises emotion to
support human-computer recognition.' the project, called eric (emotional
recognition for interaction with computers) has three main goals. ...
'many of the approaches used in speech recognition can be applied to
recognising emotion through facial recognition,' hone said. ... affective
computing can be defined as 'computing that relates to, arises from,
or deliberately influences emotion'. a number of different types of
research are encompassed within this term. for instance, some artificial
intelligence researchers in the field of affective computing are interested
in how emotion contributes to human and, by analogy, computer problem
solving or decision making..."
>>> interfaces, emotion,
speech, cognitive
science
june 19, 2003: spare
parts for the brain. the economist technology quarterly. "for
decades, artificial-intelligence buffs have been trying to create a
synthetic mind, an artificial consciousness. achieving that goal would
answer many interesting philosophical questions about what it means
to be human. that is well into the future. meanwhile, a quiet revolution
has got under way in the world of neuroscience and bioengineering. these
disciplines have made significant progress in understanding how brains
work, starting with top-level functions such as thinking, reasoning,
remembering and seeing, and breaking them down into underlying components.
to do this, researchers have been studying individual regions of the
brain and developing 'brain prostheses' and 'neural interfaces'. the
aim is not to develop an artificial consciousness (although that may
yet prove to be a by-product). instead, the goal is more pragmatic:
to find a cure for such illnesses as parkinson's disease, alzheimer's
disease, tourette's syndrome, epilepsy, paralysis and a host of other
brain-related disorders."
>>> philosophy, cognitive
science, neural networks & connectionist
systems, machine learning
june 9, 2003:
'biomimetics'
researchers inspired by the animal world - animal kingdom inspires
new breed of robots. by scott kirsner. boston globe. "some call
the field 'biomimetics,' for the efforts to mimic biology. darpa calls
it 'biodynotics' -- biologically inspired multifunctional dynamic robots.
... by either name, researchers are finding that even trying to duplicate
the simplest of animals isn't easy. ... but developing the control software
that will enable the robolobster to navigate and avoid obstacles --
never mind looking for mines -- is a tougher problem to crack. 'making
a robot move in the lab is a whole lot different from making it move
in the real world, where there are people and obstacles and other things
that you can't anticipate,' says jordan pollack, a robotics researcher
at brandeis university. one advantage those following a biological example
have, though, is that they can turn to real animals for help. [joseph]
ayers, who is developing the control software for the robolobster, uses
live lobsters as assistants. 'we have a big outdoor pool in nahant,'
he says. 'this summer, one thing we'll do is put the robot in a situation
where it's surrounded by a field of rocks. if it can't get through,
then we'll take a real lobster, and put it in the same situation. we
can see how it solves the problem, then build that into the [robot's
software].'"
>>> robots, nature
of intelligence, military, hazards
& disasters, natural resource management,
cognitive science, applications
may 26, 2003: designing
robots that can reason and react. spacedaily. "in a large room
in georgia tech's college of computing, thomas collins is tweaking the
behavior of a machine. around him stand a gaggle of robots, some with
trash can figures, others resembling miniature all-terrain vehicles.
they appear to be merely functional, plodding pieces of equipment. but
these unlikely contraptions can 'think' in the sense that they can react
to and reason about their environment. collins, a senior research engineer
in the georgia tech research institute's electronic systems laboratory,
likens the 'minds' of these machines to those of clever insects that
have learned to thrive. 'a cockroach is intelligent because it can survive
and do the things it needs to do well. by that definition, these robots
are smart,' he says. ... 'our goal is to create intelligence by combining
reflexive behaviors with cognitive functioning,' explains ronald arkin,
a regents' professor of computer science and director of the lab.'"
>>> robots, nature
of intelligence, hazards & disasters,
military, machine
learning, reinforcement learning,
applications, cognitive
science; also see our related but
is it ai? vignette
may/june 2003: creating
a robot culture - an interview with luc steels. the well-known researcher
shares his views on the turing test, robot evolution, and the quest
to understand intelligence. by tyrus l. manuel. ieee intelligent systems.
"the turing test is not the challenge that ai as a field is trying
to solve. it would be like requiring aircraft designers to try and build
replicas of birds that cannot be distinguished from real birds, instead
of seriously studying aerodynamics or building airplanes that can carry
cargo (and do not flap their wings nor have feathers). ... computers
and robots are used as experimental platforms for investigating issues
about intelligence. researchers who are motivated in this way, and i
am one of them, try to make contributions to biology or the cognitive
sciences. ... ai has had an enormous impact on how we think today about
the brain and the mechanisms underlying cognitive behavior."
>>> cognitive science, natural
language, machine learning, robots,
ai overview, turing
test, interviews & oral histories,
history, nature
of intelligence
may 1, 2003: artificial
intellect really thinking?
by fred reed. the washington times. "can machines think? the question
is tricky. most of us probably remember the defeat of garry kasparov,
the world chess champion, in 1997 by ibm's computer, deep blue. the
tournament was part of the company's deep computing project, which designs
monster computers for business and scientific research. when a calculator
takes a square root, we don't think of it as being intelligent. but
chess is the premier intellectual game. surely it requires intelligence?
... whether machines can be intelligent depends of course on what you
mean by intelligence. most of us recognize it without being able to
define it."
>>> philosophy, nature
of intelligence, chess, turing
test, cognitive science, ai
overview
april 2003: cognitive
systems. ercim news. "the european commission has identified
cognitive systems as one of the priorities for the new generation of
research projects to be developed from 2003 to 2008
(http://www.cordis.lu/ist/workprogramme/fp6_workprogramme.htm). the stated objective is to
construct physically instantiated or embodied systems that can perceive,
understand (the semantics of information conveyed through their perceptual
input) and interact with their environment, and evolve in order to achieve
human-like performance in activities requiring context-(situation and
task) specific knowledge. ercim news has chosen to devote a special
issue to this exciting research challenge in order to monitor what is
under development in europe (but not only in europe), and what is the
current status of research and development in this domain." - from
the
introduction
>>> ai overview, cognitive
science, applications, agents,
vision, machine
learning, robots, education
april 28, 2003: cognitive
journey - did you know that one malaysian has made significant studies
on artificial intelligence? by anita matthews. the star. "scientist
yeap wai kiang’s room at auckland university of technology in penrose
is trim and tidy. the lack of clutter belies the zeal and passion the
malaysian-born professor has dedicated to the pursuit of artificial
intelligence - a subject that has consumed his entire career. yeap first
discovered the realm of artificial intelligence (ai) as an undergraduate
at the university of essex in england. back in 1975, ai was just emerging
as a new field of study. 'i was fascinated by ai and was lucky as there
was a group of people with a strong interest in the subject which led
me to do research in the area,' recalled the former anglo-chinese school
student from kampar, perak. ... his advice to aspiring scientists is
to ask the right questions. he points out that young researchers are
often fearful of fundamental questions."
>>> careers in ai
(@ resources for students), cognitive
science, natural language, robots
april 14, 2003: carnegie
mellon psychologist helps build a better mine sweeper - in a low-tech
solution, soldiers are taught an expert's technique and detection soars.
by byron spice. post-gazette. "[t]he more significant advance in
demining, a practice that must continue for years after the war ends,
may be the revamped training that u.s. soldiers have received since
last spring. devised by jim staszewski, a cognitive psychologist at
carnegie mellon university, the training program teaches soldiers to
use the thought patterns and techniques honed by an expert, a 30-year
mine-detection veteran. it builds on the work of the late cmu scientists
herbert simon and allen newell, pioneers in the study of the nature
of human expertise. ... what was it that made them experts? how do experts
differ from everybody else? these were the questions being asked by
the army and the same sort of questions that mesmerized cmu's newell
and simon. the cmu researchers were popularly known for creating the
first thinking machine, launching the field of artificial intelligence,
in the mid-1950s. but this quest to make computers think like humans
went hand-in-hand with their efforts to understand how humans think.
as a result, the pair had as much impact on the field of psychology
as on computer science. 'herb simon and allen newell pretty much got
cognitive science off the ground,' staszewski said. 'i'd like to think
this work is a direct descendant of them' and other cmu psychologists,
including the late william chase. ... one of the things
they learned is that you don't have to be a genius to be an expert.
in his 1991 book, 'models of my life,' simon wrote: 'experts, human
and computer, do much of their problem solving not by searching selectively,
but simply by recognizing relevant cues in situations similar to those
they have experienced before.'"
>>> cognitive science, reasoning,
expert systems, landmines
(@hazards & disasters), ai
overview, history
march 22, 2003: i,
robot - by baby steps. the latest creation at mit's media lab, a
robot named ripley, can't play chess or guide spacecraft. he's more
like a rather slow-witted infant. by michael valpy. the globe and mail.
"ai's avant-garde reality in 2003 is ripley, rather resembling
the head of an amiable mechanical airedale. he's the creation of 34-year-old
deb roy, founder and director of the cognitive-machines group at mit's
famed media lab, who has been building robots since his winnipeg childhood.
... [w]hat looks to humans to be difficult for robots, like playing
chess, is in fact mindlessly easy. and what looks easy -- because it's
easy for humans to do -- is mind-numbingly complex. like learning language.
ripley is not being programmed with scripted speech. he is being taught
the meanings of words and how to speak, the way a human child would
be. ... ripley learns language by looking at an object, touching it
and hearing the word for it. in the media lab it is called 'grounding.'
... the team is about to teach ripley to understand the idea of point
of view. when the researcher talking to ripley describes a beanbag as
being on his own left, it will be on ripley's right. in effect, mr.
[nick] mavridis says, it will allow ripley to step outside himself and
grasp the notion of 'other.' ... robots, prof. [anne] foerst says, will
never be humans. but they could be somebodies -- individual selves."
>>> robots, cognitive
science, philosophy, natural
language, vision, ai
overview
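the 'point of view' step the team describes - a beanbag on the speaker's left appearing on ripley's right - is a change of coordinate frame. the sketch below shows the geometry for two agents facing each other; the function name, axes and separation are assumptions for illustration, not the media lab's actual code.

```python
# re-express a point given in the speaker's frame (x to the speaker's
# right, y in front of the speaker) in the frame of a robot that faces
# the speaker from `separation` metres away: left/right flips, and
# distance-in-front mirrors across the gap between them.
def speaker_to_robot(x, y, separation=2.0):
    return (-x, separation - y)

beanbag_speaker = (-1.0, 0.5)   # one metre to the speaker's left
beanbag_robot = speaker_to_robot(*beanbag_speaker)
print(beanbag_robot)  # (1.0, 1.5): to ripley's right, in front of it
```

holding both descriptions of the same object at once is what lets the robot "step outside himself" and represent the other's perspective.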
march 14, 2003: mind
of the company - science is finding that mimicking living systems
to produce robots is about understanding biology, not physics. there
are lessons here for the way we run our corporations. by tim wallace.
financial review boss. "the phrase 'fast, cheap and out of control'
was coined by australian-born scientist rodney brooks and a colleague
for an article published in 1989 advocating the use of robots in space
exploration. internet guru kevin kelly later adapted it for the title
of his 1994 book on new modes of thinking in artificial intelligence,
while filmmaker errol morris used it for his 1997 documentary film featuring
the robotics scientist. ... brooks's work on ai challenges us to rethink
oi (organisational intelligence) and to smash the machine, rebuilding
it from the bottom up - fast, cheap and out of control. ... the most
celebrated of all early efforts to create a robot that could do childish
things resulted in shakey, built at the stanford research institute
in the late 1960s to early 1970s, and so named because of the way its
camera and tv transmitter mast shook when it moved. ... the designers
of shakey, and of the projects following it, believed that for a robot
to act intelligently in the world it first needed an accurate model
of that world. ... what must be happening in insects, brooks realised,
was sensing connected to action - sensors to actuators - very quickly.
the key to building a similarly efficient robot, he concluded, was to
have it react to its sensors in the same way, so it did not need a detailed
computational model of the world. 'if the building and maintaining of
the internal world model was hard and a computational drain, then get
rid of the internal model. every other robot had had one. but it was
not clear that insects had them, so why did our robots necessarily need
them?'"
>>> nature of intelligence,
robots, reasoning,
history, cognitive
science
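brooks's insight - "sensing connected to action - sensors to actuators - very quickly," with no internal world model - can be sketched as a purely reactive controller. sensor names, thresholds and wheel speeds below are invented for illustration.

```python
# minimal reactive control in the spirit brooks describes: two range
# sensors map straight to differential wheel speeds through simple
# rules. no map, no model of the world, no planning.
def reactive_step(left_range, right_range, safe=0.5):
    """return (left_wheel, right_wheel) speeds from two range readings."""
    if left_range < safe and left_range <= right_range:
        return (0.6, 0.1)    # obstacle on the left: veer right
    if right_range < safe:
        return (0.1, 0.6)    # obstacle on the right: veer left
    return (0.5, 0.5)        # nothing near: drive straight ahead

print(reactive_step(2.0, 2.0))  # (0.5, 0.5)
print(reactive_step(0.3, 2.0))  # (0.6, 0.1)
```

in brooks's subsumption architecture many such behaviours run in parallel, with higher layers able to suppress lower ones; this sketch shows only a single layer of the idea.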
february 28, 2003: it's
a dog's life these scientists are keenly interested in - roll over,
robo-rover. by mark watson. the commercial appeal. "at a symposium
on the dynamics of perception and cognition at the university of memphis,
about 35 people have gathered to study the way biological systems, such
as dogs, perceive, understand and navigate the world. they're doing
so in order to build devices that perform as smartly. ... 'planetary
rovers today can only go a few meters a day - a day! - because they
have to stop and call home and ask, 'what should i do?'' nasa co-sponsored
the event, along with the national science foundation and u of m's institute
for intelligent systems, which will join the fedex technology institute
when it opens in the fall. ... robert kozma, a u of m associate professor
of computer science and chairman of the symposium, said these scientists
discuss techniques 'to model brain behavior, and use the results to
create artificial-intelligent devices.'"
>>> nature of intelligence,
cognitive science, neural
networks, space exploration, autonomous
vehicles
february 24, 2003: tech
for elders must have purpose. by mark baard. wired news. "seniors
will accept newfangled gadgets, as long as they come in familiar packages.
the key, researchers say, is to make assistive technologies easy to
use and familiar. the devices must also increase seniors' independence.
... aging baby boomers might happily adapt to a wireless phone-based
system that helps them navigate public transportation systems using
artificial intelligence, for example. mobility for all, part of the
cognitive levers project, known as clever, at the university
of colorado, will put cognitively impaired people on the right local
bus by combining gps and wireless technology with java-enabled smart
phones that have high-resolution displays. ... researchers admit that
technology can't fix all seniors' problems. people age differently,
and an assistive technology must get smarter as a person's functioning
declines. 'we've got to make systems that are highly customizable,'
said martha pollack, a professor of electrical engineering and computer
science at the university of michigan. ... pollack is programming the
ai brain behind nursebot, a robot that provides both cognitive and motor
support to seniors. nursing-home residents can lean on nursebot as the
machine walks them down long corridors, responds to their questions
and reminds them about appointments."
>>> assistive technologies,
interfaces, cognitive
science, applications, robotic
pets, robots
february 18, 2003: robots
are getting more sociable - researchers work on machines with a
human touch. by alan boyle. msnbc. please note:
accompanying the article is a link to an interactive brief history of
robotics. "for [david] hanson, k-bot is a step down a decades-long
path in cognitive science. future robo-faces could be used to test theories
about how humans come up with acceptable responses to social cues. eventually,
the robot itself might recognize when it has flashed an inappropriate
expression or made an ill-timed remark, then adjust its own software
accordingly. there may even be occasions when humans who have a psychological
problem with socializing could learn a thing or two from k-bot’s descendants.
many other robotics experts are working on their own brands of sociable
machines. cynthia breazeal, a professor at the massachusetts institute
of technology, was a pioneer in the field, by virtue of a cute contraption
called kismet. ... now she’s working on a furry, lop-eared robot named
leonardo, which was designed with the aid of experts in animatronics.
'there are many, many, many, many possible applications,' she said.
sociable robots could serve as entertainers, nursemaids, servants or
surrogate friends. the software advances could also lead to better on-screen
'virtual humans' in situations where the physical form isn’t needed
-- say, providing a friendly 'face' at automatic teller machines. ...
looking beyond the science and engineering, the effort to construct
more humanlike robots has a philosophical point as well, the researchers
said. 'robots have always been an intriguing mirror to our own conception
of what it means to be a human,' breazeal said."
>>> robots, history,
cognitive science, assistive
technologies, applications,
marketing, philosophy
february 6, 2003: niyogi
uses computers to analyze language evolution. by steve koppes. the
university of chicago chronicle (vol. 22 no. 9). "if a computer
could master language as well as a child does, the feat would rank as
one of the greatest technological achievements of our time. but so far,
computers fall far short of the capability. 'how do children learn the
language of their parents with seemingly effortless ease?' asks partha
niyogi, associate professor in computer science, statistics and the
physical science collegiate division. linguists, psychologists and computer
scientists specializing in artificial intelligence would all like to
know how to answer that question. the computational analysis of how
language evolves may well hold the answer, suggests niyogi, who is completing
a book on the topic. that is because children imperfectly learn the
language of their parents. ... niyogi’s ultimate goal is to build computer
systems that can interact with and learn from humans. the first step
is to teach computers how to translate sounds into words."
>>> speech, cognitive
science, natural language, applications
february 3, 2003: daniel
c. dennett - the mind machine - to cognitive scientist daniel c.
dennett, there's nothing artificial about the intelligence of computers.
watch this episode of tech tv's big thinkers series on monday 2/3 at
9:30 p.m., tuesday 2/4 at 12:30 a.m., and wednesday 2/5 at 8 a.m. eastern.
" many philosophers and scientists have pointed out the similarities
between the human brain and the computer, but no one has dedicated more
time to those similarities than this week's big thinker daniel c. dennett
of tufts university. considered a radical by many in the cognitive science
field, we sat down with dennett to find out why he believes that the
mind -- and indeed consciousness itself -- is solely a series of computations."
a video highlight - - daniel c. dennett on artificial intelligence -
is available online.
>>> cognitive science, ai
overview, philosophy, interviews,
show time, resources
january 21, 2003: chess
champion faces off with new computer. by paul hoffman. the new york
times (no fee reg. req'd). " in 1997, garry kasparov, the russian
grandmaster who was then the world champion, played a highly publicized
match, billed as "the last stand of the brain," against the i.b.m. supercomputer
deep blue. ... now almost six years later, mr. kasparov, who is 39,
has found an appropriate silicon stand-in for the i.b.m. machine. on
sunday, he begins a six-game $1 million match against an israeli program,
deep junior, the three-time world computer chess champion. the match
is sponsored by the world chess federation. ... cognitive psychologists
discovered that grandmaster chess was more of a game of pattern recognition
than calculation. but no programmer succeeded in codifying that more
elusive ability into a set of rules that a machine could follow. the
situation today is that both humans and machines can play world-class
chess, but they approach the game completely differently. ... 'the match
will be close, but i'm determined to win,' mr. kasparov said. 'one thing
i know is that humans' days at the top of the chess world are limited.
i give us just a few years.' 'the only sure way to defeat a computer,'
mr. [alexander] baburin said, 'will be to cut its power source.'"
>>> chess, history,
cognitive science, pattern
recognition, reasoning, games
& puzzles, machine learning
december 16, 2002: ngee
ann lecturers find way to make computers think like a human brain.
by ca-mie de souza. channel newsasia. "two lecturers at ngee ann
polytechnic said they had discovered a way to make computers think like
a human brain. ... like a library which arranges its books in categories,
the team said the brain's grey matter functioned in much the same way.
so they designed the 'digital gray matter' technology, which allows
computers to store and classify information. ... dr alexei mikhailov,
lecturer at ngee ann polytechnic, said: 'i believe now we can significantly
improve artificial intelligence tools. they will become cheaper, they
will become more intelligent and it will not just improve the quality
of life, but it could also save our lives.' ... at the moment, artificial
intelligence is already used in robots - in a us$1 billion market that's
growing at 45 percent a year. dr pok yang ming, lecturer at ngee ann
polytechnic, said: 'artificial intelligence has been in place over the
last 20 to 30 years. all these are discovered outside singapore. but
neural cortex or the digital gray matter is discovered in singapore.'"
>>> cognitive science, applications,
machine learning, industry
statistics
december 10, 2002: human
or computer? take this test. by sara robinson. the new york times
(no-fee reg. req'd). "as chief scientist of the internet portal
yahoo, dr. udi manber had a profound problem: how to differentiate human
intelligence from that of a machine. his concern was more than academic.
rogue computer programs masquerading as teenagers were infiltrating
yahoo chat rooms, collecting personal information or posting links to
web sites promoting company products. ... the roots of dr. manber's
philosophical conundrum lay in a paper written 50 years earlier by the
mathematician dr. alan turing, who imagined a game in which a human
interrogator was connected electronically to a human and a computer
in the next room. the interrogator's task was to pose a series of questions
that determined which of the other participants was the human. ... dr.
manuel blum, a professor of computer science at carnegie mellon who
took part in the yahoo conference, realized that the failures of artificial
intelligence might provide exactly the solution yahoo needed. why not
devise a new sort of turing test, he suggested, that would be simple
for humans but would baffle sophisticated computer programs. dr. manber
liked the idea, so with his ph.d. student luis von ahn and others dr.
blum devised a collection of cognitive puzzles based on the challenging
problems of artificial intelligence. the puzzles have the property that
computers can generate and grade the tests even though they cannot pass
them. the researchers decided to call their puzzles captchas, an acronym
for completely automated public turing test to tell computers and humans
apart (on the web at www.captcha.net)."
>>> turing test, natural
language, cognitive science
december 2002: a
smarter way to sell ketchup - this is your brain. this is your brain
in the marketing department. any questions? by alissa quart. wired magazine.
"cognitive science, which draws on psychology, neuroscience, sociology,
and computer science, has an illustrious history. the discipline has
brought us innovations in artificial intelligence, cybernetics, and
neural networking. but increasingly, it's about ketchup. cognitive science
isn't just being put to work for better marketing - it's also helping
to make more sophisticated products. there's cog-sci fieldwork behind
cereal ads, and lab experiments support the marketing of jeans. cognitive
scientists are investigating why kids might feel positive about, say,
coke but hate pepsi; or why zoog is a catchy add-on to the disney brand."
>>> cognitive science
november 11, 2002: good
morning, dave... the defense department is working on a self-aware
computer. by kathleen melymuka. computerworld. " any sci-fi buff
knows that when computers become self-aware, they ultimately destroy
their creators. from 2001: a space odyssey to terminator, the message
is clear: the only good self-aware machine is an unplugged one. we may
soon find out whether that's true. the defense advanced research projects
agency (darpa) is accepting research proposals to create the first system
that actually knows what it's doing. the 'cognitive system' darpa envisions
would reason in a variety of ways, learn from experience and adapt to
surprises. it would be aware of its behavior and explain itself. it
would be able to anticipate different scenarios and predict and plan
for novel futures. ... cognitive systems will require a revolutionary
break from current computer evolution, which has been adding complexity
and brittleness as it adds power. 'we want to think fundamental, not
incremental improvements: how can we make a quantum leap ahead?' says
ronald j. brachman, director of darpa's information processing technology
office in arlington, va. brachman will manage the agency's cognitive
system initiative. ... but what about hal 9000 and the other fictional
computers that have run amok? 'in any kind of technology there are risks,'
brachman acknowledges. that's why darpa is reaching out to neurologists,
psychologists - even philosophers - as well as computer scientists.
'we're not stumbling down some blind alley,' he says. 'we're very cognizant
of these issues.'"
>>> cognitive science, scifi,
networks, speech,
machine learning, applications,
robots, philosophy,
military, ai overview, ethical
& social implications
october 15, 2002: robot
'judy' center of futuristic theater piece. by travis cannell. daily
nexus (uc santa barbara). "as computers continue to become faster,
smaller and cheaper, some cognitive scientists wonder if tomorrow's
computers will ever match human intelligence and become self-aware.
breaking away from traditional hard science, the ucsb cognitive science
program staged a theatrical production, entitled 'judy,' which posed
the question: if you build a robot smart enough to do the dishes, would
it also be smart enough to find them boring? ... robert bernstein, a
local santa barbara resident and robotics enthusiast, thought judy's
character presented a plausible vision of artificial intelligence. ...
psychology dept. associate professor mary hegarty was dubious about
the idea of a machine that could think for itself in the near future."
>>> cognitive science, philosophy,
ai overview
september 19, 2002: who's
afraid of the new science? review of "the blank slate: the
modern denial of human nature." the economist. "steven pinker's
provocative new book is full of catchy examples like this that he uses
to highlight two radically different ways of conceptualising and explaining
our behaviour: one with an eye to culture, learning and the social sciences,
the other with an eye to nature, genetic inheritance and experiment.
he makes no bones about where he stands. social science and its popularisers
have, he thinks, systematically ignored or derided recent strides by
neuroscience, artificial intelligence, behavioural genetics and evolutionary
psychology. ... at this point, it would have been neater for a two-camps
approach if hard science, as mr pinker calls it, were united against
the rogues and cretins of cultural relativism in rejecting the blank
slate. but, ever honest, he admits that the blank slate still has defenders
among tough-minded and experimental researchers: in artificial intelligence,
'connectionists' who think brains work like neural networks simulated
on computers 'learning' from statistical patterns with only weak constraints
on their inner structure (the near-blank slate) ..."
>>> cognitive science, neural
networks, machine learning
september 9, 2002: hidden
in nature. the new york times (no-fee reg. req'd). "what if
we could actually harness nature's secrets to create remarkable new
inventions - insect based robots, armies of artificial ants? scientists
are just beginning to reap the benefits of using nature's way to solve
problems. ... studying how animals move can teach how to build better
machines, but studying how animals behave can teach us a whole new way
to think. doctor eric bonabeau is one of the proponents of a new branch
of science called swarm intelligence. a flock of birds, a swarm of bees;
it looks like they're following a complex plan. but research into how
swarms and flocks behave reveals that each ant or bee is actually following
only a few simple rules of behavior, which when multiplied by thousands
achieves astonishing feats. dr. alcherio martinoli and his colleagues
are simulating these behaviors in the lab to try to learn how to make
groups of robots work together, just like ants."
>>> nature of intelligence,
robots, cognitive
science, artificial life, multi-agent
systems
august 16, 2002: does
schmoozing make robots clever? by matthew broersma. cnet. "a
belgian professor doing research for sony wants to teach robots to be
more like people--but he's running into some resistance. ... steels'
work deals with machine intelligence, but it's a fundamentally different
view from that embodied in the famous 'turing test.' according to the
turing theory, a human-like intelligence has successfully been created
when a human can't tell the difference between a conversation with the
artificial intelligence and a real one. 'i think the turing test is
a bad idea because it's completely fake,' steels said. 'it's like saying
you want to make a flying machine, so you produce something that is
indistinguishable from a bird. on the other hand, an airplane achieves
flight but it doesn't need to flap its wings.' similarly steels believes
that machines can evolve intelligence through interaction with one another
and with their ecology -- but this synthetic intelligence is unlikely
to bear much superficial resemblance to human intelligence. ... this
notion has met with resistance on both theoretical and practical levels.
some scientists, such as rodney brooks of mit, have argued that intelligent
behavior doesn't need internal representations."
>>> nature of intelligence,
turing test, cognitive
science
june 10, 2002: berkeley
minds find computers can't think - yet. by lee gomes. associated
press / published in the wall street journal / available from new jersey
online. "will super-smart machines ever be built? if they are,
will they be conscious? at places like m.i.t., academic careers and
entire departments were built by answering 'yes' to those sorts of questions,
starting in the 1950s and 1960s. at berkeley, though, came thundering
dissents, notably from hubert dreyfus and john searle, both from the
university's philosophy department. ... it's in the field of 'cognitive
science,' devoted to the study of the mind, where the berkeley school's
triumph is most apparent. the discipline became popular roughly a generation
ago, when ai was ascendant and when the computer was viewed as an apt
metaphor for the brain. the sloan foundation decided to back cognitive
sciences, and made big grants to two schools. one was m.i.t., a bastion
of ai research. the other was berkeley, where the skeptics held out.
it's getting harder to find anyone in cognitive sciences who still believes
that computers are useful models for the brain. instead, most people
in the field spend their time actually studying brains: scanning 'em,
slicing 'em, dicing 'em. it's essentially the dreyfus-searle research
agenda: to understand the mind, forget about computers and look at the
gray stuff inside our heads."
>>> cognitive science, philosophy,
history
june 8, 2002: thinking
computers must hallucinate, too. by david gelernter. the straits
times. "creating a computer that 'thinks' is one goal of artificial-intelligence
research. ... the single most important fact about thought follows from
an obvious observation: these four styles are connected. we can label
them 'analysis', 'common sense', 'free association' and 'dreaming'.
but the key point is that they are four points on a single, continuous
spectrum, with analysis at one end and dreaming at the other. psychologists
and computer scientists like to talk about analysis and common sense
as if they were salt and steel, or apples and oranges. we would do better
to think of them as red and yellow, separated not by some sharp boundary,
but by a continuous range of red-oranges and orange-yellows."
>>> philosophy, commonsense,
creativity, emotions,
cognitive science, analogy
june 2002:
sex differences in the brain. by doreen kimura. scientific american
- special issue: the
hidden mind. "any behavioral differences between individuals
or groups must somehow be mediated by the brain. sex differences have
been reported in brain structure and organization, and studies have
been done on the role of sex hormones in influencing human behavior.
but questions remain regarding how hormones act on human brain systems
to produce the sex differences we described, such as in play behavior
or in cognitive patterns."
>>> cognitive science, nature
of intelligence
also see:
sexes
handle emotions differently. bbc (july 23, 2002). "scientists
have come up with a theory to explain why men and women seem to deal
with emotion in different ways. they believe that the sexes use different
networks in their brains to remember emotional events. this may explain
why women are more likely to be emotional and to remember fraught occasions.
researchers from stanford university in california used scan technology
to measure the brain activity of 12 men and 12 women who were shown
a range of images."
sexes
'brains work differently.' bbc (july 8, 2001). "boys and girls
were equally good at both tasks. but they appeared to use different,
though sometimes overlapping, parts of their brains to process the information.
the researchers believe it is possible that boys process faces at a
global level, an ability more associated with the right hemisphere of
the brain. conversely, girls may process faces at a more local level
- an ability associated with the brain's left hemisphere."
may 7, 2002: a
human touch for machines - the radical movement of affective computing is
turning the field of artificial intelligence upside down by adding emotion
to the equation. by charles piller. los angeles times. "for the
last decade, the uc san diego psychologist has traveled a quixotic path
in search of the next evolutionary leap in computer development: training
machines to comprehend the deeply human mystery of what we feel. [javier
] movellan's devices now can identify hundreds of ways faces show joy, anger,
sadness and other emotions. the computers, which operate by recognizing
patterns learned from a multitude of images, eventually will be able to
detect millions of expressions. ... such computers are the beginnings of
a radical movement known as 'affective computing.' the goal is to reshape
the very notion of machine intelligence. ... such devices may never replicate
human emotional experience. but if their developers are correct, even modest
emotional talents would change machines from data-crunching savants into
perceptive actors in human society. at stake are multibillion-dollar markets
for electronic tutors, robots, advisors and even psychotherapy assistants.
... classical ai researchers model the mind through the brute force of infinite
logical calculations. but they falter at humanity's fundamental motivations.
... movellan is part of a growing network of scientists working to disprove
long-held assumptions that computers are, by nature, logical geniuses but
emotional dunces. ... scientists don't foresee machines with hal's emotional
skills--or, fortunately, its malevolence--soon. but they already have debunked
ai orthodoxy considered sacrosanct only five years ago--that logic is the
one path to machine intelligence. it took psychologists and neuroscientists--outside
the computer priesthood--to see inherent limits in the mathematical pursuit
of intelligence that has dominated computer science."
>>> emotion, cognitive
science, machine learning, vision,
history, robots,
ethical & social implications
march 29, 2002: scientists
challenge theory of mind's eye. different parts of brain used to process
real and imagined images. by brad evenson. national post. "most people
use mental imagery to seek answers to these questions, a faculty known
as the mind's eye. some of these images are so precise, experimental psychologists
believed the same brain mechanism that handled visual images also allowed
us to imagine what the world is like. but now u.s. and canadian researchers,
using a scanner to map brain activity as subjects performed cognitive
tasks, have raised doubts about this theory. the study was published this
week in the journal neuron. ... the findings could have applications in
designing object-recognition systems in robots and artificial intelligence
systems. ... consider moving a couch through a doorway. visual recognition
would compare the doorway space with the couch and determine whether it
would fit. using mental rotation, one might spin the couch on its end
and visualize squeezing it through the doorway."
>>> cognitive science,
robots, image understanding
spring 2002: a
body of knowledge. by stephen kiesling. spirituality & health magazine.
"other brains in the body? apparently so. what got me thinking about
this was, once again, the humanoid robots at mit. a big advance in making
robots move like humans was rodney brooks’s development of 'distributed
intelligence' -- small brains spread throughout the robot that concentrate
on particular tasks. without these small brains, the problem of walking
is too complicated for the robot’s central processor."
>>> cognitive science, nature
of intelligence, robots, multi-agent
systems
february 21, 2002: toyland
is tough, even for robots. by barnaby j. feder. the new york times
(no fee reg. req'd). "mr. [mark] tilden has been arguing with little
success for well over a decade that progress in robotics would be much
more rapid if researchers concentrated on designing relatively dumb robots
rather than devices stuffed with increasingly powerful programmable electronic
brains. the trick, in mr. tilden's view, is to equip simple-minded but
physically robust robots with mechanical variations on animal nervous
systems. nervous networks do not organize and process information digitally
as computers do. 'all life is analog,' mr. tilden said."
>>> robots, cognitive
science, toys
there's more!
news indexed by topic:
cognitive science
fyi: as explained in this announcement, on march 1, 2007 aaai changed its name from the american association for artificial intelligence to the association for the advancement of artificial intelligence.
fair use notice
©2000 - 2007 by aaai