
Robotics

The Basics:

Since Leonardo da Vinci’s first sketches of a mechanical man, robots have captivated humankind. Today, in many domains, robots are mere tools. In others, researchers strive to build robots that emulate, and perhaps surpass, the physical and mental deftness of their creators.

In the simplest sense, a robot must engage with information from its surroundings and do something physical with that information. The precision and thoroughness with which robots engage with their environment, and the level of creativity and functionality in their analysis of this environment, are some of the parameters weighed by pioneering robotics researchers.

In 1959 robots entered the field of manufacturing and illustrated the vast potential for specialized, consistent, and accurate machines that could do work for humans in hazardous or inhumane work environments. As computer processors get cheaper and faster, top researchers are renewing their interest in robotics and setting ambitious goals, moving the field into the next generation, in which robots are beginning to possess real sociability and versatility. These robots, like Domo [1] and Kismet [2] from MIT, incorporate theories from social development psychology, ethology, and evolution. The field on the whole is looking ahead to an environment where robots may become capable of assisting in the more intimate, fundamental aspects of life. This research, spearheaded at MIT, sees robotics, together with AI, as developing versatile capabilities and ultimately the ability to learn and improve.

 

The Procedures:

While traditional autonomous robots, such as specialized robotic arms working on factory lines, are designed to operate independently of humans, the next generation of sociable humanoid robots is being designed to interact, cooperate, and learn from people. This trend coincides with current trends in AI research, and in fact it is difficult to fully tease the two domains apart. At IBM, for example, the follow-up project to Deep Blue, Joshua Blue [3], pursues advanced AI robotics by studying how children develop dexterity and intelligence from their physical and social environment.

This ‘learner’ paradigm plays an important role at MIT as well. Their first major success with a sociable robot, Kismet [4], was designed with an altricial system, similar to a young child. Kismet included visual and auditory sensors; a processing system that modeled attention, behavior, perception, motivation, and emotion with algorithms and heuristics inspired by evolution and developmental psychology; and an output system that produced appropriate vocalizations, head and eye orientations, and facial expressions. “Domo” [5], MIT’s follow-up to Kismet, applies these same principles to a robot that can locate a human by sound and sight, grasp an offered item, and place it on a shelf requested by a human (http://www.youtube.com/watch?v=Ke8VrmUbHY8&NR=1).

Even though researchers all over the world have made enormous strides modeling human tasks, the simplest actions, like walking, still prove difficult for robotics researchers. At the Honda labs, researchers have seen success with their robot ASIMO. But even while ASIMO ascends stairs using an onboard computer that works to neutralize stability-jeopardizing forces, it can’t mimic the fluidity with which a child moves (http://www.youtube.com/watch?v=Q3C5sc8b3xM). Probably the leader of the pack in terms of agility is the four-legged BigDog by Boston Dynamics (http://gizmodo.com/368651/new-video-of-bigdog-quadruped-robot-is-so-stunning-its-spooky).
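
To make the sense-process-act pattern behind a robot like Kismet concrete, here is a minimal sketch of such a behavior loop in Python. Everything in it, the drives, the sensor stand-ins, and the output actions, is an invented toy illustration, not code from Kismet or Domo.

    # A minimal sketch of a sense-process-act behavior loop, loosely
    # patterned on the architecture described above. All names and
    # numbers are toy inventions, not the actual Kismet system.

    import random

    def perceive():
        """Stand-in for visual/auditory sensing: report whether a
        face was seen and how loud the environment is."""
        return {"face_seen": random.random() > 0.5,
                "loudness": random.random()}

    def appraise(stimuli, drives):
        """Crude attention/motivation step: social contact satisfies
        the social drive, while a quiet room slowly starves the
        stimulation drive. Attend to the least-satisfied drive."""
        if stimuli["face_seen"]:
            drives["social"] = min(1.0, drives["social"] + 0.2)
        drives["stimulation"] = max(0.0, drives["stimulation"]
                                    - stimuli["loudness"] * 0.1)
        return min(drives, key=drives.get)

    def act(drive):
        """Map the winning drive to an expressive output."""
        if drive == "social":
            print("orient eyes toward face, vocalize greeting")
        else:
            print("scan the room for something new")

    drives = {"social": 0.5, "stimulation": 0.5}
    for _ in range(3):          # three iterations of the behavior loop
        act(appraise(perceive(), drives))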

 

Relationship to Terasem:

As long as robotics research employs diverse scientific disciplines, fruitful theories and implications for sociable robots will have applications for Cyberconsciousness. Computer systems that emerge to enable a robot to sense stimuli from its environment, interpret the data, and operate in various novel ways can be almost directly applied to Cyberconsciousness. Clearly these systems will rely heavily on AI and neuroscience research, and the three disciplines will likely advance and cross-pollinate in concert.

Furthermore, nanobots, the smallest of all robots, pose great possibilities for the exploration of the human body and mind [6]. Nanobots will operate with the same central tenets as all robots (to sense, interpret, and act) but will aim to model biological machinery (such as cilia and flagella) to do their work on the molecular scale (http://bionano.rutgers.edu/or.html). These nanorobots, because of their size, will likely have the ability to enter human arteries to repair damage and to interact with nerve cells to gather information about the still mysterious nature of consciousness. By gathering previously inaccessible information, nanorobotics may provide breakthroughs for sociable robotics, Cyberconsciousness, and nearly every other domain of advanced science.

 

Sources and further reading:

http://www.shadowrobot.com/
http://www.ai.mit.edu/projects/humanoid-robotics-group/kismet/kismet.html
http://web.mit.edu/newsoffice/2007/domo.html
http://asimo.honda.com/default.aspx
http://www.thetech.org/exhibits_events/online/robotics/universal/
http://www.ri.cmu.edu/
http://www.nanobot.info/
http://bionano.rutgers.edu/or.html
http://www.youtube.com/watch?v=Ke8VrmUbHY8&NR=1
http://www.youtube.com/watch?v=Q3C5sc8b3xM

 

Artificial Intelligence

 

The Basics:

Alan Turing, the brilliant mathematician famous for helping to crack the cipher machine used by the Germans in WWII, put forth the possibility of artificial intelligence in 1950 in his seminal paper “Computing Machinery and Intelligence” [1]. Six years later, at a Dartmouth College conference spearheaded by the computer scientist John McCarthy, expert mathematicians came together to officially launch the discipline of Artificial Intelligence.

Artificial Intelligence (AI) is the engineering of machines and computer programs that possess intelligence. However, a definition of intelligence proves elusive. John McCarthy calls it the “computational art of achieving your goals” [2]. The “art” portion of this statement still challenges the AI community more than 50 years after the Dartmouth conference.

Artificial Intelligence today should be divided into two realms: Narrow AI and Artificial General Intelligence (AGI). Narrow AI systems are confined to specific domains of intelligence. IBM’s Deep Blue computer, for example, can process 600 million chess moves per second. By using computer algorithms that enable intelligence development with regard to positions and situations, Deep Blue was able, in 1997, to defeat the top-ranked chess player in the world, Garry Kasparov [3]. Deep Blue’s chess intelligence is astounding, but it is unable to apply this intelligence in another domain, such as global climate change prediction. Kasparov’s famous quip after his loss, “At least it couldn’t enjoy its victory,” expressed further limitations of narrow AI.

General AI research is largely pursued by universities and not-for-profit institutions because the task currently seems much more ambitious and much less marketable than narrow AI systems. The field seeks to develop software and robotic systems with intelligence that can be applied to a variety of environments to solve a wide array of complex problems. Like the exploration of space, the challenging task of general AI research draws on a variety of subjects, attracts some of the most brilliant scientific minds, and produces technologies and theories that benefit many other scientific pursuits.
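
The kind of search that powers a chess engine can be shown in miniature. Below is a minimal sketch of minimax with alpha-beta pruning, the textbook core of game-tree search, applied to the toy game of Nim; Deep Blue’s actual search, evaluation function, and special-purpose hardware were vastly more elaborate than this illustration of the principle.

    # A minimal sketch of minimax search with alpha-beta pruning,
    # applied to Nim: players alternately remove 1-3 sticks, and
    # whoever takes the last stick wins. A score of +1 means the
    # maximizing player can force a win; -1 means it cannot.

    def alphabeta(sticks, maximizing, alpha=float("-inf"), beta=float("inf")):
        if sticks == 0:
            # The previous player took the last stick and won, so this
            # position is a loss for whoever is now to move.
            return -1 if maximizing else 1
        best = float("-inf") if maximizing else float("inf")
        for take in (1, 2, 3):
            if take > sticks:
                break
            score = alphabeta(sticks - take, not maximizing, alpha, beta)
            if maximizing:
                best = max(best, score)
                alpha = max(alpha, best)
            else:
                best = min(best, score)
                beta = min(beta, best)
            if alpha >= beta:      # prune: the opponent avoids this line
                break
        return best

    # From 10 sticks the first player can force a win, so this prints 1.
    print(alphabeta(10, maximizing=True))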

 

The Procedures:

The quest to achieve AGI, a young, immensely complex undertaking, is fractured into subgroups, each using different paradigms in their research. The main philosophical divide exists between the “neats” and the “scruffies” [4]. In the world of AI, the neats are researchers who favor a clear and organized approach; they favor models of design that can be neatly proven correct. Scruffies, on the other hand, contend that intelligence is inextricably connected with the irrational mystery of consciousness, and is too fluid and complicated to approach with the clear and rational tools of logic and applied statistics. The divide represents different perspectives on the nature of the human brain. Are we fundamentally rational or irrational? The disagreement also highlights the question of whether or not researchers should pursue AI with the powerful but extremely messy human brain as a model. Presumably, other models of intelligence, different from the human model, could arise and prove immensely effective.

The prominent neat John McCarthy has famously said that he cares about general artificial intelligence, but not necessarily via the imitation of humanoid intelligence. On the other hand, the prominent AI researcher Jeff Hawkins, inventor of the Palm Pilot, believes that the key to developing truly intelligent machines lies in the complex neocortex of the human brain. His company, Numenta, develops computer memory systems modeled on the human neocortex.

Both schools of AI research find it necessary to draw on many resources to tackle their immense challenge. Search algorithms, logic systems, probability theory, economic theories, evolutionary computation, and neural network theories all come into play. Furthermore, new ideas, even radically different paradigms, must be considered to continue the general AI pursuit. At the 50th-anniversary meeting of the 1956 Dartmouth conference, the organizers affirmed that the metaphor of the brain as a computer must be discarded. The panel acknowledged considerable advances in narrow AI over the last 50 years but stated that “what has been missed is – we believe – how important embodiment and the interaction with the world are as the basis for thinking. Quite recently it has become evident that many fields (linguistics, cognitive sciences, neuroscience, morphogenesis, artificial intelligence, robotics, and material sciences) are highly relevant in order to advance the state of the art. It is our conviction that breakthroughs can only be achieved by a strong cross-fertilization between these fields.”

IBM seems to agree with this advice. While working on their latest AI endeavor, Joshua Blue, researchers at IBM have made a point of seeking out experts on both the neat and the scruffy side in order to launch a project that seeks to model intelligence after the brain of a young child [5]. Joshua Blue incorporates natural language understanding, common sense reasoning, and, to some extent, emotional intelligence capabilities. The software design company Novamente applies a similar perspective as it develops computer programs that can reflect on their past actions and learn from those experiences. The hope, as the Dartmouth panel hints, is that the key to unlocking AGI may lie in studying and modeling how intelligence develops.
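
Learning from one’s own past actions, the capability attributed to Novamente’s programs above, has a simple textbook form in reinforcement learning. Below is a minimal sketch of tabular Q-learning on a toy corridor world; it is a generic illustration of the idea, not IBM’s or Novamente’s actual method.

    # A minimal sketch of a program learning from its own experience:
    # tabular Q-learning on a 5-cell corridor with a reward at the far
    # right end. This is a generic textbook technique, not the actual
    # Joshua Blue or Novamente software.

    import random

    N_STATES = 5                  # corridor cells 0..4; reward waits at 4
    ACTIONS = (-1, +1)            # step left or step right
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    alpha, gamma, epsilon = 0.5, 0.9, 0.1

    for episode in range(200):
        s = 0
        while s != N_STATES - 1:
            # Mostly exploit past experience, occasionally explore.
            a = (random.choice(ACTIONS) if random.random() < epsilon
                 else max(ACTIONS, key=lambda act: Q[(s, act)]))
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0
            # Update the value of the action just taken from its outcome.
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                                  - Q[(s, a)])
            s = s2

    # After training, the learned policy steps right (+1) from every cell.
    print([max(ACTIONS, key=lambda act: Q[(s, act)])
           for s in range(N_STATES - 1)])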

 

Relationship to Terasem:

An AI system’s recognition of its own intelligence and an ability to improve itself, a sort of consciousness, will undoubtedly prove a necessary development as researchers pursue the most advanced AGI systems. Terasem’s mission of cyberconsciousness [6] plays an important role in the possibility of an AGI system because cyberconsciousness, although currently still theoretical, combines the massive parallelism, evolutionary history, and versatility of the human brain with the increasing speed, breadth, and longevity of the Internet. This powerful union could prove the ultimate form of AI, capable of solving extremely complex problems with diverse subject matter.

Undoubtedly, as the Internet becomes an ever larger part of our lives, it will become increasingly incorporated in the AGI paradigms. Already we are seeing search engines and Internet advertisements develop a sort of narrow intelligence. With the development of Web 3.0 (the Semantic Web), the enormous content of the Internet may become readable and understood by computer programs, expanding the breadth of their intelligence. This development would allow websites such as Terasem’s lifenaut.com access to the vast stores of information on the web, exponentially increasing the data available to the site’s “Mindware” and increasing the chances for researchers to uncover the parameters and qualities of emergent artificial intelligence.

“AI’s in the future will be able to recreate people from the information left behind about them if suitable backups of their brain were not made (in which case it would be straightforward). Neural nanobots would obtain all the available information about them from other people’s brains. The AI would also consider all of the person’s writings, pictures, movies, etc. Also their genetic code. And it could then create a person who would pass a Turing test for that person with their best friends as the judges. For that reason it is worthwhile keeping your own files — letters, emails, photos, writings, etc.

Is this recreated person the same person? It is an interesting question, but we could also ask today are we the same person as we were, say, a year ago. The recreated person by the AI is probably at least as close as we are to ourselves after some time passage.”

—Ray Kurzweil
Inventor of the All-Font Scanner, Talking Book for the Blind & Kurzweil Piano
Creator of AI Music Composers, AI Poets and AI Artists
Recipient of National Medal of Technology

 

References and Further Reading:

http://www.aboutai.net/DesktopDefault.aspx
http://www.compapp.dcu.ie/~humphrys/ai.links.html
http://www-formal.stanford.edu/jmc/whatisai/
http://www.a-i.com/
http://www.aaai.org/home.html
http://www.csail.mit.edu/index.php
http://www.nytimes.com/2007/12/02/books/review/Henig-t.html?_r=1&bl&ex=1197003600&en=1c347f0cdd5b6e57&ei=5087%0A&oref=slogin
http://www.npr.org/templates/story/story.php?storyId=16816185&ft=1&f=1007
http://www.isi.imi.i.u-tokyo.ac.jp/~maxl/Events/ASAI50MV/

 




Multimedia Spider Deciphering Research

 

The Basics:

Web crawlers are the diligent workers of the World Wide Web. These methodical crawler programs comb the web, utilizing its extensive linkages to provide useful, up-to-date lists of relevant web sites in response to human queries.

As the web evolves, so do its web crawlers. The new web (called Web 3.0 or the Semantic Web [1]) will require web crawlers capable of interpreting content embedded in diverse sources of text, photos, audio, and video. These ‘multimedia deciphering spiders’ are the next generation of web crawlers.

Traditionally, when web users search for information, web crawlers follow specific algorithms to seek out useful sites based on key words and link popularity. The web users then must search through the results to find exactly the piece of information they seek. Multimedia spiders could take over this task for humans. With advancements in AI research and standardization of the web landscape, these spiders will drastically cut down information acquisition time and set the stage for a true harnessing of the power of the Internet. This isn’t a new idea: Tim Berners-Lee proposed it as early as 1994 at the first World Wide Web conference [2]. Today, despite 50 years of AI research, the semantic web and its spiders remain largely unrealized.
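
Link popularity, one of the two ranking signals mentioned above, can be computed with a short iterative algorithm. Below is a minimal sketch of PageRank-style scoring by power iteration over an invented four-page link graph; any real search engine’s ranking pipeline is far more involved.

    # A minimal sketch of link-popularity scoring by power iteration,
    # the idea behind PageRank. The four-page link graph is invented
    # purely for illustration.

    links = {                      # page -> pages it links to
        "a": ["b", "c"],
        "b": ["c"],
        "c": ["a"],
        "d": ["c"],
    }
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    damping = 0.85

    for _ in range(50):            # iterate until the ranks settle
        new = {p: (1 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            share = damping * rank[p] / len(outs)
            for q in outs:
                new[q] += share    # each page passes rank to its targets
        rank = new

    # Pages with many (and well-ranked) in-links float to the top.
    print(sorted(rank.items(), key=lambda kv: -kv[1]))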

 

The Procedures:

Multimedia spider deciphering research necessarily focuses on both the functional spiders and the semantic web that serves as their terrain. Tech start-ups (Radar Networks [3]), established titans (Google [4] and Yahoo [5]), and public sector organizations (the World Wide Web Consortium [6]) have all taken up this challenge [7][8]. Some organizations put emphasis on advancing the capabilities of multimedia spiders, developing their ability to interpret diverse forms of data with concepts borrowed from AI. Others take up the task of standardizing the language of cyberspace, making the spider’s job of extracting meaning easier.

The quest for this standardization is especially enthusiastic in the life sciences, where research demands the integration of heterogeneous data sets originating from separate subfields. These scientific communities, through the adoption of ontologies, are developing language standards for scientific web databases. Ultimately a researcher will be able to ask a specific scientific question, and multimedia spiders, through interpretation of the standardized language across all the relevant scientific web pages, will yield not a website but a specific answer culled from many websites and many forms of media.

These web agents necessarily possess a strategy and an architecture, as sketched below. They start with lists of primary URLs to visit according to a certain topic, and as they “crawl” through these sites they identify links to related sites, recursively adding them to their list of sites to be visited, until they finally emerge with a list of the most promising sites (in the case of web crawlers) or a specific answer (in the case of multimedia deciphering spiders). The programmed strategy and architecture of web crawlers dictate how deeply they pursue peripheral sites and how quickly they revisit updated sites. The finest programmed crawlers emerge with a balance of volume, quality, and freshness [9].
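
The crawl strategy just described, seed URLs, link extraction, and recursive enqueueing, reduces to a frontier queue. Here is a minimal sketch in Python, with an in-memory toy link graph standing in for real HTTP fetching and HTML parsing, and a depth bound standing in for the policy on how deeply peripheral sites are pursued.

    # A minimal sketch of the crawl loop described above. The toy_web
    # dictionary is an invented stand-in for fetching pages and
    # extracting their links; max_depth bounds how far from the seeds
    # the crawler will wander.

    from collections import deque

    toy_web = {                    # url -> links found on that page
        "seed.example/robots":  ["a.example", "b.example"],
        "a.example":            ["b.example", "c.example"],
        "b.example":            ["c.example"],
        "c.example":            [],
    }

    def crawl(seeds, max_depth=2):
        seen = set(seeds)
        frontier = deque((url, 0) for url in seeds)
        visited = []
        while frontier:
            url, depth = frontier.popleft()    # breadth-first order
            visited.append(url)
            if depth == max_depth:
                continue                       # don't go deeper
            for link in toy_web.get(url, []):
                if link not in seen:           # never fetch a page twice
                    seen.add(link)
                    frontier.append((link, depth + 1))
        return visited

    print(crawl(["seed.example/robots"]))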

 

Relationship to Terasem:

The pursuit of cyberconsciousness through AI advancement, advanced personality capture, or nanotechnology will develop speed and scope as it incorporates multimedia spiders navigating the semantic web. Rather than humans searching through websites to extract relevant data, spiders could perform this task at a tremendous pace. For cyberconsciousness websites like lifenaut.com, spiders could comb the vast media offerings of the web to accumulate and connect sources relevant to a certain personality. The multimedia spidering software could also be used within mindfiles to search video itself for emotions, events, and memories and link them with related text documents and chatbot conversations. The possibilities are infinite.

Like many of the science topics behind Terasem, multimedia spider deciphering research should not be viewed in isolation. The true potential of these spiders will undoubtedly be realized only in concert with AI advancements. Spiders combing the semantic web will provide seemingly intelligent answers to questions, but the integration of these answers to solve the most complex problems will only be realized as the spiders achieve emergent AI properties. The potent combination of the semantic web and web spiders with some form of AI will enable exceptional fidelity in the personality capture available at lifenaut.com and, with time, advanced cyberconsciousness.

 

Sources and further reading:

http://money.cnn.com/magazines/business2/business2_archive/2007/07/01/100117068/index.htm?postversion=2007070305
http://www.nytimes.com/2006/11/12/business/12web.html?_r=1&oref=slogin
http://eprints.ecs.soton.ac.uk/12614/1/Semantic_Web_Revisted.pdf
http://www.news.com/8301-10784_3-9824586-7.html?tag=nefd.top
www.obofoundry.org