
Expert Systems with Applications 38 (2011) 7430–7439

Contents lists available at ScienceDirect

Expert Systems with Applications

journal homepage: www.elsevier.com/locate/eswa

An intelligent framework to manage robotic autonomous agents

Renato Vidoni a,1, Francisco García-Sánchez b,*, Alessandro Gasparetto a,1, Rodrigo Martínez-Béjar b

a DIEGM, Department of Electrical, Management and Mechanical Engineering, University of Udine, Via delle Scienze 208, 33100 Udine, Italy
b Facultad de Informática, Campus de Espinardo, Universidad de Murcia, 30071 Espinardo (Murcia), Spain


Keywords: Robots, Autonomous agents, Artificial Intelligence, Web services, Ontologies

0957-4174/$ - see front matter © 2010 Elsevier Ltd. All rights reserved.
doi:10.1016/j.eswa.2010.12.080

* Corresponding author. Tel.: +34 868 884634; fax: +34 868 884151. E-mail addresses: [email protected] (R. Vidoni), [email protected] (F. García-Sánchez), [email protected] (A. Gasparetto), [email protected] (R. Martínez-Béjar).
1 Tel.: +39 0432 558257/8041; fax: +39 0432 558251.

In this paper a joint application of Artificial Intelligence (AI), robotics and Web services is described. The aim of the work presented here was to create a new integrated framework that takes advantage, on one side, of the sensing and exploring capabilities of robotic systems that work in the real world and, on the other side, of the information available via the Web. Robots are conceived as (semi-)autonomous systems able to explore and manipulate a portion of their environment in order to find and collect information and data. The Web, which in a robotic domain is usually considered a channel of communication (e.g. tele-operation, tele-manipulation), is here also conceived as a source of knowledge. This makes it possible to define a new framework able to manage robotic agents so as to obtain precise, real-time information from the real world, while software agents search for and retrieve additional information from the logical world of the Web. The intelligent administration of these services can be applied in different environments, optimizing procedures and solving practical problems. To this end, a traffic control application has been defined and a simplified test case implemented.

© 2010 Elsevier Ltd. All rights reserved.

1. Introduction

In recent decades, the diffusion of Artificial Intelligence, robotics and Web applications has made it possible to devise new joint applications aimed at building systems that solve problems or improve the efficiency of real systems. Moreover, research on autonomous robots and autonomous systems is an open issue, and the availability of new, easy-to-share information sources such as the Web lets us conceive the Web as a source of data that can be managed by means of a remote controller or supervisor agent. In fact, the use of the Web as a communication channel with a remote controller or supervisor is well studied and applied in different fields in order to develop robust and effective systems. In the majority of applications, the Web connection is used to facilitate the remote control of systems (i.e. through human commands), as is the case in tele-operation and tele-manipulation (Farajmandi, Gu, Meng, Liu, & Chen, 2003; Schulz, Burgard, Fox, Thrun, & Cremers, 2000; Skrzypczynski, 2001; Tomizawa, Ohya, & Yuta, 2002, 2003), to offer a channel of communication to the robots of a swarm (Tso, Zhang, & Jia, 2007), and to interface the human with the robotic domain through the Web (Obraczka et al., 2007).

So far, at least two research lines have not been fully explored: the use of the Internet as a data source, and the use of robots as brokers of knowledge for a high-level autonomous system able to process user queries and manage the information discoverers. This means that the Internet can be used both for sharing information between different agents and for getting information by consulting databases and Web sites. It also leads to thinking of robots as agents able to find elements and sense the environment in order to complete the knowledge map of the application considered.

Looking at the real world, many problems can be faced and solved by exploiting the information that can be recovered from both sources, the Web and physical robotic systems. In this sense, robots can behave like sentinels or scouts in the environment, with a task and a proper behavior (e.g. obstacle avoidance, entity/object recognition), and can work and cooperate as slave systems, leaving the leadership to master autonomous agents. In such a case, a centralized approach can be implemented, the ‘‘brain’’ being constituted by some master agents, while robots and Internet services would act as laborers.

Once the task is defined, both of these systems can cooperate and be used to find the best strategy (e.g. send the robots on the supposedly shortest or safest path given the available information). The whole system can also be employed to verify whether a particular strategy can be followed by sending out the physical agents and, in case of failure, collecting the new information coming from the robots’ sensors in order to redefine/refine the (optimal) strategy. Hence, this kind of joint action can be used to optimize many different tasks and applications where the available information is not static but can change, and where both the database system and the strategy to follow have to be updated. Moreover, the task of these


applications can be autonomously accomplished by the intelligent system made of autonomous software and robotic agents.

Thus, research on Human–Agent–Robot interaction focuses on the cognitive, physical and social interaction between agents, robots (Kaminka, 2004, 2007) and people, in order to provide collaborative intelligence and extend human capabilities. Robots (and/or agents) can be useful in different contexts and environments, such as disaster recovery, search and rescue tasks, delivering health care, assisting the elderly and increasing productivity in the workplace.

In the literature, different approaches and applications for developing multi-agent teamwork and creating cooperation between humans, agents and robots can be found (see Freedy et al., 2004, 2008; Innocenti, Lòpez, & Salvi, 2009; Johnson, Feltovich, Bradshaw, & Bunch, 2008a, 2008b; Pynadath & Tambe, 2003; Scerri et al., 2003). Successful applications have been developed by integrating and extending KAoS (Knowledgeable Agent-oriented System), a collection of componentized agent services compatible with several popular agent frameworks (Bradshaw, Dutfield, Benoit, & Woolley, 1997, 1999, 2001), in order to create a single environment for human-agent work systems (e.g. with Brahms and Nomads, as in Bradshaw et al. (2003), Clancey, Sierhuis, Kaskiris, & van Hoof (2003), Sierhuis (2001), Sierhuis et al. (2003), and Suri et al. (2000)). A more recent evolution of this network infrastructure is presented in Johnson et al. (2008a, 2008b), where KAoS HART (Human–Agent–Robot Teamwork) has been adapted to provide dynamic regulation between agents, robots, traditional computing platforms and Grid computing. Grid computing is the application of resources from many networked computers to a single problem at the same time. Its evolution led to the so-called socio-cognitive grid concept, and applications in the human-agent domain can be found in Bruijn & Stathis (2003), Pitt & Artikis (2003), and Ryutov (2007). The idea behind socio-cognitive grids is to provide cognitive and social resources accessible on electronic devices in support of common activities. This transforms the Net into a human resource, ideally accessible by anyone, anytime and anywhere, for solving a particular problem.

Taking into account these human-Artificial Intelligence interactions, in our approach an alternative integration between humans, software and physical agents is studied and realized. Thus, our goal with the new framework is to further facilitate the three-party communication between humans, the logical world and the physical world. Ontologies (Studer, Benjamins, & Fensel, 1998) have been chosen for knowledge representation, agent technology for facing the dynamism of the logical and physical worlds, and the intelligent agents’ properties for enabling the framework to deal with the ever-changing environment.

The paper is organized as follows. In Section 2 the background technologies and systems are presented and discussed. The proposed framework is presented in detail from a general point of view in Section 3. In Section 4, the application of the framework to traffic control is described and a simplified real test case is presented. Concluding remarks and future work directions are put forward in Section 5.

2. Background technologies

2.1. Multi-Agent systems

An agent is a computer system situated in some environment and capable of autonomous action in this environment in order to meet its design objectives (Wooldridge, 2002). According to Wooldridge, an agent has to fulfill some properties in order to be considered intelligent: reactivity (the ability to perceive its environment and respond to changes in a timely fashion), pro-activeness (the ability to exhibit goal-directed behavior by taking the initiative), and social ability (the ability to interact with other agents) (Wooldridge, 2002). Intelligent agents (IA) can present other properties as well, such as temporal continuity (an agent operates continuously and unceasingly), reasoning (the decision-making mechanism by which an agent decides to act on the basis of the information it receives, in accordance with its own objectives), rationality (the mental property that leads an agent to maximize its achievement and to try to reach its goals successfully), veracity (the mental property that prevents an agent from knowingly communicating false information), and mobility (the ability of a software agent to migrate from one machine to another).
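These properties can be made concrete with a minimal control loop. The sketch below is illustrative only (the class, goal and message names are invented, not part of the system described in this paper): the agent reacts to percepts, handles messages from peers (social ability), and otherwise takes the initiative toward its goal (pro-activeness).

```python
# Minimal sketch of Wooldridge-style agent properties (hypothetical,
# not the paper's implementation): reactivity, pro-activeness, social ability.

class SimpleAgent:
    def __init__(self, goal):
        self.goal = goal          # goal-directed (pro-active) behavior
        self.inbox = []           # messages from other agents (social ability)
        self.log = []

    def perceive(self, percept):
        # Reactivity: respond to environment changes in a timely fashion.
        self.log.append(f"react:{percept}")

    def receive(self, message):
        # Social ability: interact with other agents via messages.
        self.inbox.append(message)

    def step(self):
        # Pro-activeness: with no pending input, take initiative toward the goal.
        if self.inbox:
            self.log.append(f"handle:{self.inbox.pop(0)}")
        else:
            self.log.append(f"pursue:{self.goal}")

agent = SimpleAgent(goal="map-area")
agent.perceive("obstacle")
agent.receive("status?")
agent.step()   # handles the pending message first
agent.step()   # then pursues its own goal
```

A real deliberative agent would replace the `step` heuristic with a reasoning mechanism, but the loop already exhibits the three core properties listed above.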

Agents can be useful as stand-alone entities in charge of particular tasks on behalf of a user. In the majority of cases, however, agents exist in environments that contain other agents, thus constituting Multi-Agent Systems (MAS). A MAS consists of a group of agents that can potentially interact with each other (Vlassis, 2007). By exploiting this feature, several advantages can be attained, such as reliability and robustness, modularity and scalability, adaptivity, concurrency and parallelism, and dynamism (Elamy, 2005).

In order to operate in complex, dynamic and unpredictable environments (e.g., air traffic control, autonomous-spacecraft control, health care, industrial-systems control) that involve a high degree of complexity, an agent-oriented programming paradigm is desirable. Agent-Oriented Software Engineering (AOSE) (Wooldridge & Ciancarini, 2000) aims to satisfy this requirement by creating methodologies and tools that enable the inexpensive development and maintenance of agent-based software. In the present work the INGENIAS methodology (Pavón, Gómez-Sanz, & Fuentes, 2005), which was proposed to facilitate the MAS development process, has been adopted due to its completeness and tool support.

With the purpose of standardizing agent technologies for the interoperation of heterogeneous software agents, the Foundation for Intelligent Physical Agents (FIPA) has become an IEEE Computer Society standards organization and has developed specifications that establish a set of shared rules. Thanks to these standardization efforts, agent platforms compliant with the FIPA rules and directives have been created, such as FIPA-OS (FIPA-Open Source), JADE (Java Agent Development Environment) and ZEUS. The JADE agent platform (Bellifemine, Caire, Poggi, & Rimassa, 2008) has been chosen in our work to implement agents and deploy a multi-agent environment.
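A FIPA-ACL message carries parameters such as the performative, sender, receiver, content, content language and ontology. The following is a sketch of such a message as a plain data structure; it is not the JADE API, the `render` surface form is only ACL-like rather than exact FIPA syntax, and the agent names and ontology label are invented for illustration.

```python
# Sketch of a FIPA-ACL-style message as a plain data structure (illustrative;
# real platforms such as JADE provide their own message classes).
from dataclasses import dataclass

@dataclass
class ACLMessage:
    performative: str            # e.g. "request", "inform", "query-ref"
    sender: str
    receiver: str
    content: str
    language: str = "fipa-sl"    # content language (FIPA-SL is common)
    ontology: str = ""           # shared vocabulary the content refers to

    def render(self):
        # A readable, ACL-like surface form (not the exact FIPA syntax).
        return (f"({self.performative} :sender {self.sender} "
                f":receiver {self.receiver} :content \"{self.content}\")")

msg = ACLMessage("request", "CustomerAgent", "RobotAgent",
                 "explore sector-4", ontology="traffic-control")
print(msg.render())
```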

2.2. Autonomous robots

Taking into account the different possible application fields, a robot can be considered as an electro-mechanical, programmable, self-controlled device capable of performing a variety of tasks on command or according to instructions and intelligent algorithms implemented in advance. Robots can be classified into two main categories according to the task to be carried out: industrial robots and service robots.

Industrial robots are created to perform repetitive, strenuous, accurate and precise tasks, allowing high-quality and cost-effective actions and manufacturing tasks. Service robots, on the other hand, are aimed at providing services to humans rather than serving ‘‘manufacturing’’ purposes. These robotic systems can be found in an ample repertory of daily-life domains, including domestic and leisure environments, health and professional services, and hazardous environments. Applications such as robots for mowing the lawn, cleaning floors or buildings, and entertainment and play are already commercially available (e.g. Roomba, AIBO, ASIMO; Sakagami et al., 2002), while robots as companions, fully social interactive robots or servants are at a development phase (Dautenhahn et al., 2005; Pineau, 2003).

Fig. 1. The framework layout.


The exploration and recognition of dangerous or non-human-accessible areas by means of robots is a topic of interest for many research teams in diverse areas (Bertrand, Bruckner, & van Winnendael, 2001; Hollingum, 1999). In the case of industrial and service systems, the real world and the application environment can be viewed, at different levels, as an information source. Indeed, robots can be directly controlled or programmed to perform repetitive actions, tele-operated, or provided with autonomous and intelligent capabilities. In all cases, robots can be used to obtain information from their environment by means of sensors (so that they can touch, see, hear, measure, etc.). Moreover, they can manipulate and modify their environment by means of actuators, both to discover new information and to perform different tasks.

In this work, we are interested in exploiting all the actions, information and services that a robot can provide upon request. Thus, the framework and the robots we use must feature a certain degree of autonomy. Autonomy refers to systems capable of operating in the real-world environment without any form of external control for extended periods of time. Hence, autonomous robots can be viewed as physical agents that are able to perceive and extract information from their environment and use this knowledge to move and act in a purposeful manner.

For intelligent autonomous robots, the three most important desirable features are perception, localization and navigation. In conventional robots, the tasks associated with each of these features are developed independently, and each one uses its own algorithms and data structures. Depending on the working environment, which can be more or less structured and dynamic, and on the pre-loaded information and maps of the working area, the perception and localization tasks have different levels of complexity. Perception and localization can be achieved through subsequent actions, i.e. recognition and mapping (Thrun, 2003; Zhang et al., 2007), or through Simultaneous Localization And Mapping (SLAM) techniques (e.g. Durrant-Whyte & Bailey, 2006). If the updated knowledge that ensues from the perception and localization tasks is used to bridge the information gap present at a higher software-agent level, complex and articulated problems can be faced and solved. Robot navigation is related to the task and service to be carried out and goes through the planning and actuation phases.
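The perception/localization cycle described above can be illustrated with a textbook discrete Bayes filter on a one-dimensional cyclic corridor. This is a deliberately simplified sketch (invented world, made-up sensor probabilities), not the SLAM techniques cited in the text: each sensing step reweights the belief over positions, and each motion step shifts it.

```python
# Illustrative discrete Bayes localization on a 1-D cyclic corridor
# (a textbook sketch of the perception/localization loop; the world map
# and the sensor probabilities p_hit/p_miss are invented for the example).

def sense(belief, world, measurement, p_hit=0.6, p_miss=0.2):
    # Perception: reweight each cell by how well it explains the measurement,
    # then renormalize so the belief remains a probability distribution.
    b = [p * (p_hit if world[i] == measurement else p_miss)
         for i, p in enumerate(belief)]
    total = sum(b)
    return [p / total for p in b]

def move(belief, step):
    # Exact cyclic motion model: shift the belief by `step` cells.
    n = len(belief)
    return [belief[(i - step) % n] for i in range(n)]

world = ["door", "wall", "door", "wall", "wall"]
belief = [0.2] * 5                     # uniform prior: position unknown
belief = sense(belief, world, "door")  # the robot sees a door
belief = move(belief, 1)               # it moves one cell to the right
belief = sense(belief, world, "wall")  # it sees a wall
belief = move(belief, 1)               # it moves one more cell
belief = sense(belief, world, "wall")  # it sees a wall again
best = belief.index(max(belief))       # most likely cell given door,wall,wall
```

The observation sequence door, wall, wall while moving right is only fully consistent with a start at the second door (cell 2), so the belief concentrates on cell 4.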

In this article, according to Wooldridge’s definition (Wooldridge, 2002), autonomous robots are considered as physical autonomous agents. The integration between the capabilities of software and physical agents is described further in this paper.

2.3. Semantic web services

Traditionally, the Web has been conceived as a distributed source of information. The emergence of Web Service (WS) technology (Booth et al., 2004) has made it possible to extend it into a distributed source of functionality. WS make applications available on the Web in a standard way, so that they can be accessed regardless of the operating system they are deployed on and the programming language they are implemented in. In the stack of emerging standards for Web Services, three major layers can be highlighted. UDDI (Universal Description, Discovery and Integration) provides a mechanism for clients to find Web Services by defining a standard way to publish and discover information about them. WSDL (Web Services Description Language) provides a description of the connection and communication with a particular Web Service by describing its functionality in an XML language. Finally, SOAP (Simple Object Access Protocol) is a standard that defines XML-formatted messages exchanged between two applications over the Internet protocols. The main limitation of this technology is that, due to the growth of the Web in size and diversity, it lacks the automation necessary for satisfying needs such as the discovery, execution, selection, composition and inter-operation of WS (Fensel & Bussler, 2002).
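As a rough illustration of the SOAP layer, the snippet below builds a minimal SOAP 1.1 request envelope with Python's standard library. The service namespace, the operation `GetRouteStatus` and its `routeId` parameter are hypothetical, invented for the example; a real service defines these in its WSDL description.

```python
# Building a minimal SOAP 1.1 request envelope with the standard library
# (illustrative; the service namespace and operation are made up).
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
SVC_NS = "http://example.org/traffic"          # hypothetical service namespace

ET.register_namespace("soap", SOAP_NS)
envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
# The operation element and its parameter would normally come from the WSDL.
op = ET.SubElement(body, f"{{{SVC_NS}}}GetRouteStatus")
ET.SubElement(op, f"{{{SVC_NS}}}routeId").text = "A23"

xml_bytes = ET.tostring(envelope)   # the message an HTTP POST would carry
```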

On the other hand, the Semantic Web (SW) aims at adding semantics to the data published on the Web (i.e., establishing the meaning of the data), so that machines are able to process these data in a way similar to what a human can do (Berners-Lee, Hendler, & Lassila, 2001). The SW is therefore characterized by the association of machine-accessible formal semantics with traditional Web content. Ontologies are the standard knowledge representation technology in the SW. For the purposes of our work, we have adopted the following definition of ontology: ‘‘an ontology is a formal and explicit specification of a shared conceptualization’’ (Studer et al., 1998). In this context, formal refers to the need for machine-understandable ontologies, which eventually enable automatic reasoning. The ontology language selected in this work is OWL (Web Ontology Language) (McGuinness & van Harmelen, 2004), namely, the current W3C (World Wide Web Consortium) Semantic Web standard ontology language.
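The automatic reasoning that a formal ontology enables can be hinted at with a toy triple store that infers superclasses transitively, as an OWL reasoner would. This is a pure-Python sketch with invented class and individual names, not an actual OWL workflow.

```python
# A toy triple store with transitive subClassOf reasoning (pure Python,
# illustrative of what an OWL reasoner provides; names are invented).

triples = {
    ("TrafficLight", "subClassOf", "RoadDevice"),
    ("RoadDevice", "subClassOf", "PhysicalObject"),
    ("tl42", "instanceOf", "TrafficLight"),
}

def superclasses(cls):
    # Transitive closure over subClassOf, as a reasoner would infer it.
    found = set()
    frontier = {cls}
    while frontier:
        c = frontier.pop()
        for s, p, o in triples:
            if p == "subClassOf" and s == c and o not in found:
                found.add(o)
                frontier.add(o)
    return found

def types_of(individual):
    # An individual belongs to its asserted class and all its superclasses.
    direct = {o for s, p, o in triples if s == individual and p == "instanceOf"}
    return direct | {sup for c in direct for sup in superclasses(c)}
```

Asking for the types of `tl42` yields the asserted class plus the two inferred ones, which is exactly the kind of entailment that makes ontology-backed discovery more flexible than keyword matching.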

Semantic Web Services (SWS) are the joint application of WS and the SW (McIlraith, Son, & Zeng, 2001). They consist of describing services with semantic content so that service discovery, composition and invocation can be done automatically, for example by software agents able to process the semantic information provided. SWS are thus defined through a service ontology enabling machine interpretability of their capabilities. A standard for SWS technology has not yet been defined, and different approaches such as OWL-S (Martin et al., 2007), WSMO (Roman et al., 2005), SWSF (Battle et al., 2005), WSDL-S (Li et al., 2006) and SAWSDL (Verma & Sheth, 2007) (the current W3C recommendation) are available.

3. The integrated framework idea

The new framework aims to create a platform that exploits services from both the physical and the logical worlds, by using robots and WS respectively, with the purpose of bringing together on-line and off-line resources in order to provide users with added-value services.

3.1. Layout

The proposed framework is based on a four-layer architecture (see Fig. 1). The lowest layer, the ‘‘Ground Layer’’, contains the sources of functionality in the system. It has been split into two groups: sources of functionality from the logical world and


sources of functionality from the physical world. By sources of functionality from the logical world we refer to the business processes that take place in organizations and constitute their operational engine. A company’s business logic comprises business rules that express the business policy, and workflows (i.e. ordered tasks of passing documents or data from one participant to another). Putting it all together provides the basic ingredients for the publication of functionality on the Web. On the other hand, the sources of functionality from the physical world can be almost anything, from houses and machinery to the weather and other environmental elements. Changes in the environment can be produced by actuators and perceived by sensors, and these changes can be advertised as functionality provided by the system.

The second layer in the multi-tier infrastructure is the ‘‘Services Layer’’. In this layer, the functionality that emerges from the lower layer is exposed online in the form of services of two flavors: Web Services (WS) and robot-provided services. WS account for some of the companies’ internal business processes. Given that millions of services can be accessible worldwide at the same time, a mechanism is necessary to automate the discovery and management of those services that are needed at a particular point in time. The semantic description of service capabilities is meant to solve that particular issue. By formalizing the description of the functionality that WS offer, we enable machines (i.e., software programs) to autonomously process that information and interact with services without the need for human intervention.

Robot-provided services refer to the features that physical agents in the real world possess, which may be of use and provide utility for end-users. Both industrial robots and more autonomous robots can be used as services. Indeed, both of these systems can alter the surrounding world by means of their actuators and perceive changes in the environment through their sensors. Industrial robots can be exploited in their particular environment, while mobile robots can be exploited in different and increasingly less structured environments (i.e., indoors or outdoors). These functions can be announced as services that the system provides through the robots’ APIs (Application Program Interfaces) and communication procedures.

The third tier is the ‘‘Intelligent Agents Layer’’. Agents are responsible for acting as intermediaries between end-users and the services available in the ‘‘Services Layer’’. The role of agents is threefold:

1. act on behalf of service consumers;
2. act on behalf of service producers;
3. perform management tasks.

The agents that act on behalf of service owners manage the access to services (i.e., negotiation and invocation) and ensure that the contracts are fulfilled (through monitoring activities). The agents acting on behalf of service consumers, on the other hand, have to locate services (through discovery and composition processes), agree on contracts (by means of negotiation and selection activities), and receive and present the results. Last but not least, the agents that carry out management tasks have to ensure system stability and monitor the status of all interactions.
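One possible way to picture the three roles is a broker that dispatches each request to a role-specific handler. The sketch below is purely illustrative: the class names, role labels and handler strings are invented and do not correspond to the SEMMAS design.

```python
# Sketch of the threefold agent roles behind a single dispatcher
# (hypothetical structure and names, for illustration only).

class ProviderAgent:
    def handle(self, request):
        # Service-owner side: negotiate access and invoke the service.
        return f"provider:negotiate+invoke({request})"

class CustomerAgent:
    def handle(self, request):
        # Consumer side: discover candidate services and select one.
        return f"customer:discover+select({request})"

class FrameworkAgent:
    def handle(self, request):
        # Management side: monitor interactions and keep the system stable.
        return f"management:monitor({request})"

class Broker:
    def __init__(self):
        self.roles = {"produce": ProviderAgent(),
                      "consume": CustomerAgent(),
                      "manage": FrameworkAgent()}

    def route(self, role, request):
        # Dispatch each request to the agent playing the matching role.
        return self.roles[role].handle(request)

broker = Broker()
result = broker.route("consume", "find traffic service")
```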

Finally, the ‘‘Application Layer’’ contains the applications that are built on top of the framework. There are no constraints on the types of applications or the domains they can be applied in. In order to customize a particular application in a given domain, it is necessary to organize (orchestrate and coordinate) a set of agents to actually carry out useful activities on behalf of users. In fact, depending on the agents available in the system and the way they interoperate, different user-tailored applications can be obtained.

3.2. Architecture

The basic pillars of the framework are: users (service consumers), (robot- and Web-provided) services, and agents (facilitators). The integrated system, built on the basis of SEMMAS (García-Sánchez, Fernández-Breis, Valencia-García, Gómez, & Martínez-Béjar, 2008, 2009), a framework for seamlessly integrating IA and SWS, is composed of three loosely-coupled components (see Fig. 2): a Multi-Agent System (MAS) with different types of agents, an ontology repository in which several ontologies are stored, and a set of services divided into two distinct groups: Web Services and robot-provided services.

3.2.1. The MAS

The MAS operates as an intermediary between users and services and is composed of three different types of autonomous agents: (1) service-representative agents (i.e. ‘‘Provider Agent’’, ‘‘Service Agent’’ and ‘‘Robot Agent’’), (2) user-representative agents (i.e. ‘‘Customer Agent’’, ‘‘Discovery Agent’’ and ‘‘Selection Agent’’), and (3) system-management agents (i.e. ‘‘Framework Agent’’ and ‘‘Broker Agent’’).

The agents that function as service representatives are responsible for managing the access to services and ensuring that the conditions agreed upon for the service execution are satisfied. Agents that act as user representatives are in charge of locating the appropriate services, agreeing on contracts with their providers, and receiving and presenting the results of their execution.

The ‘‘Robot Agent’’, which has been added to the SEMMAS platform, interfaces the software-agent environment with the real world and undertakes the ‘‘robot’’ role, which includes the means to communicate with the physical robot. Hence, this agent combines the standard capabilities of software agents with special-purpose abilities: it is in charge of opening and closing the communication with the correct autonomous robot, translating the task or request information into robot commands, and managing the results and the required information coming from the different available sensors. Depending on the kind of robot to be commanded, the Robot Agent can either act directly on the robot or only send the task and receive the results. For example, if an industrial manipulator has to be operated to carry out a particular action, the ‘‘Robot Agent’’ can directly open the communication, control the robot, order the actions to perform, read the sensors, and close the communication. In such a case, the integrated system made of the ‘‘Robot Agent’’ and the physical robot becomes a true autonomous system.

If a mobile autonomous robotic system has to be driven, the "Robot Agent" has to open the communication, translate the overall task information into the robot language, communicate it, and wait for an answer. In this case, the basic actions (e.g., rotate, read a sensor, turn) are autonomously performed by the robot. Hence, both the "Robot Agent" and the physical robot are autonomous agents. The communication with robots has to be robust and effective, and can be realized at different levels depending on both the working environment and the availability of resources. Different wireless communication capabilities have to be integrated and available in order to send requests and commands to, and receive answers and information from, the robots. Wireless radio (e.g., Wi-Fi and Bluetooth) or infrared communication devices can be mounted and easily integrated into the robotic systems, thus creating robots that can be reached, queried and used in order to share, discover and use information.

3.2.2. Ontology repository

Various ontologies must be considered for the system to properly function, and the need to deal with the semantic heterogeneity issue (Noy & Halevy, 2005) is satisfied through the "Broker Agent" mediation mechanisms (see (García-Sánchez et al., 2008; García-Sánchez, Martínez-Béjar, Valencia-García, & Fernández-Breis,

Fig. 2. The framework architecture.

7434 R. Vidoni et al. / Expert Systems with Applications 38 (2011) 7430–7439

2009) for a more detailed discussion of the way the framework addresses interoperability issues).

The exploited ontologies are categorized into five groups: domain ontologies, application ontologies, agent local knowledge ontologies, Semantic Web Services ontologies and negotiation ontologies. In this approach, the:

• domain ontology represents a conceptualization of the specific domain the framework is going to be applied in. This ontology supports the communication among the components in the framework without misinterpretations;
• application ontology may involve several domain ontologies to describe a certain application. For the purposes of this work, the application ontology embraces the knowledge entities (i.e., concepts, attributes, relationships, and axioms) that model the application in which the framework is to be used;
• service ontology is the small piece of ontology used by a single service for service description. We assume the existence of one or various remote ontology repositories containing the semantic description of the available services. As stated before, the framework does not impose any restriction in terms of the kind of SWS specification (i.e., OWL-S, WSMO, SWSF, WSDL-S or SAWSDL) to be used;
• negotiation ontology comprises both negotiation protocols and strategies, so that agents can choose the best mechanism at run-time. Indeed, when a group of individual agents form a MAS, the presence of a negotiation mechanism to coordinate such a group becomes necessary. The appropriateness of the negotiation mechanisms in a particular situation highly depends on the problem under question and the application domain;
• agent local knowledge ontology generally includes knowledge about the assigned tasks as well as the mechanisms and resources available to achieve those tasks. Thus, for example, the knowledge ontology for the "Broker Agent" may contain the mapping rules it has to apply to resolve the interoperability mismatches that might occur during the system execution.

By selecting a particular set of ontologies, the developer can customize the framework and make it work in a concrete domain to solve a specific kind of problem.

3.2.3. Services

Services are the entities that provide all the functionality the system can offer. They fall into two groups: Web Services (WS) and robot-provided services (RS). WS account for some of the companies' internal business processes; RS refer to the features physical agents possess that may be of use and provide utility for end users.

The semantic description of the services' capabilities is meant to provide a mechanism to automate the discovery and management of the services. By formalizing the description of the functionality that WS offer, software programs can autonomously process the information and interact with services without the need for human intervention.

From a general point of view, robots can alter the world by means of so-called actuators and perceive changes in the environment through their sensors. These functions can be announced as services that the system consumes through robots' APIs (Application Program Interfaces). Indeed, physical robots can be used to provide different services, ranging from simple and direct knowledge acquisition (e.g., vision, distance, sound and touch sensors) to more complex behaviors and actions such as searching for, finding, counting, manipulating and moving objects, discovering, analyzing, and interacting with humans (e.g., requesting and getting information), by merging intelligent capabilities, sensors and actuators.
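A minimal sketch of how such sensing capabilities could be exposed as a service contract on top of a robot API. The interface, the stub and the derived-perception method below are our own illustrative assumptions (the 10–15 cm range echoes the ultrasonic sensor setup used later in the experiments):

```java
// Hypothetical sketch: a robot's sensing capability announced as a
// service contract, with a stub standing in for the real API binding.
public class RobotServiceSketch {

    // Sensing capability exposed as a service contract.
    public interface SensingService {
        double readDistanceCm();              // ultrasonic distance sensor

        // Derived perception: obstacle within the detection range.
        // The 10-15 cm window is illustrative, taken from the test setup.
        default boolean obstacleAhead() {
            double d = readDistanceCm();
            return d >= 10 && d <= 15;
        }
    }

    // A stub implementation returning a fixed reading, for demonstration.
    public static class StubRobot implements SensingService {
        private final double distance;
        public StubRobot(double distance) { this.distance = distance; }
        public double readDistanceCm() { return distance; }
    }
}
```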

Notice that all these capabilities offer data and knowledge directly from the real world, thus creating a source of real-time, up-to-date information. Moreover, the active manipulation of the environment and the possible subsequent direct sensing offer important capabilities that cannot be achieved only by means of common Web Services.

4. Framework application domain

4.1. Use case scenario

Various governmental bureaus keep traffic information (both at regional and national levels) stored in databases, these being regularly updated. Commonly, this information is to some extent public and accessible to all citizens. Traffic information is generally more precise and up-to-date for big cities and severely congested roads.


However, when it comes to side and small streets or less-traveled roads, there is little or even no information available. This lack of complete information makes it difficult for machines to automatically calculate an optimal route from one location to another on the basis of traffic conditions. In order to fill this gap, robots may play an important role. Thus, given the high mobility and flexibility robots may have, they might be used to gather traffic data for the streets about which public databases do not hold any information. In this manner, users could have access to both the information coming from governmental databases and the live traffic data robots may provide. A sample scenario is shown in Fig. 3. In this scenario, a citizen is willing to get to the city center through the optimal path. This goal is sent to the system, which first obtains information about all the possible routes to get to the target, and then finds out information about the traffic conditions in all the streets involved in the proposed routes. With all the information obtained, the route containing the least-traveled streets can be returned.

This process is described step-by-step next.

1. Query input: the citizen, by means of a Web browser, sends the query in natural language. The query is received by an instance of the "Customer Agent" that will represent and act on behalf of that particular citizen within the system. The "Customer Agent" incorporates a natural language processing tool capable of translating the query into a machine-understandable (ontological) language.

2. Service discovery, composition and selection: the "Customer Agent" sends the goal ontology to an instance of the "Selection Agent", which starts looking for services that can fulfill the goal. As none of the available services will probably be able to achieve the goal by itself, the "Selection Agent" decomposes the goal into finer-grained subgoals. The decomposition involves, first, getting the different possible routes to the target address and, second, getting information about the traffic conditions in the streets taking part in those routes. At this point, the system will probably discover several services capable of fulfilling each subgoal and then select the ones that better suit the user's needs (in terms of price, dispatch time, trustworthiness, etc.). This is done by means of an instance of the "Selection Agent". The discovery process is carried out based on (1) the matching between the goal ontology and the WS semantic descriptions, and (2) the detection of robots that provide information about relevant areas. For service selection, a negotiation process takes place that tries to reconcile user requirements with service providers' conditions.

Fig. 3. Traffic scenario.

3. Service execution: once the services have been selected, they are orchestrated for their execution. For this, the "Customer Agent" has to notify the pertinent instances of the "Service Agent" and the "Robot Agent" (i.e., the representatives of the WS and robots involved in the goal resolution, respectively) that they must initiate the service execution process. A "Service Agent" is basically a WS client that has the means to derive, from the semantic description of a service, the methods it has to invoke and the parameters that are necessary for the invocation. A "Robot Agent", on the other side, has access to the robot API and invokes it by means of RPCs (Remote Procedure Calls).

4. Presenting the results: the "Customer Agent" is responsible for integrating the results of the execution of all the services. When the execution of all the services has finished, the corresponding agents send the results to the "Customer Agent". By using the goal ontology, the domain ontology and the services' semantic descriptions, this agent is able to compose a unique response to be sent to the citizen.

5. System management: during this process, the "Framework Agent" monitors all the activities taking place in the system. If it finds that the system has become blocked, the "Framework Agent" initiates the procedure to unblock it. On the other hand, when an agent receives a message it cannot interpret, it sends the message in question to the "Broker Agent", which is in charge of translating the message contents into terms understandable by the corresponding agent. For this, the "Broker Agent" has access to the mapping rules between the ontologies in the system.
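The decomposition performed by the "Selection Agent" in step 2 above can be sketched as follows. The hard-coded table stands in for what would really be derived from the goal ontology, and all names are hypothetical:

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the "Selection Agent" decomposition step: a goal
// no single service can fulfil is split into finer-grained subgoals, each
// of which is then matched against the available service descriptions.
public class GoalDecompositionSketch {

    // Toy decomposition table; a real system would derive this from the
    // goal ontology rather than from a hard-coded map.
    private static final Map<String, List<String>> DECOMPOSITION = Map.of(
        "optimal-route", List.of("find-candidate-routes", "get-traffic-conditions"));

    public static List<String> decompose(String goal) {
        // A goal with no entry is assumed atomic and returned unchanged.
        return DECOMPOSITION.getOrDefault(goal, List.of(goal));
    }
}
```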

4.2. Implementation

A test-case has been implemented as a first attempt to take advantage of both physical and software agents with the proposed framework idea. The chosen application is a path planning application where a specific task, i.e., going from a starting point to a final target, has to be accomplished while minimizing particular variables (e.g., time, distance traveled). The realized application is simplified with respect to the framework described above in order to test the soundness of the idea, and only three types of software agents are exploited.

The system structure is shown in Fig. 4. The types of software agents implemented are: the Path Planning Agent (i.e., "Service Agent"), the PHYsical Agent (i.e., "Robot Agent") and a Brain Agent, which exploits the functions of the other agents. Moreover, the Web service is simulated as a database with a map of the roads and crosses, while the physical agents are real robots. The Brain



Agent (BA) is the core of the system and acts in order to take advantage of the Path Planning Agent (PPA), the PHYsical Agent (PHYA) and the robots for accomplishing a particular task and updating the database information about the obstructed lines encountered. Hence, the BA plays the system master role and the other agents act as the system slaves.

The BA is the core of the whole system. This agent receives the information concerning the overall job to do, processes it and autonomously makes the adequate decisions in order to request and find an optimal solution and path (e.g., the shortest path).

The PHYA acts in order to implement the task: it commands to the robot the actions that have to be performed and informs the BA about the action result (i.e., success or failure).

Taking into account the available information, the PPA is called by the BA with a request action in order to find and suggest the best path that the robot has to follow.

The real environment is a structure of roads and crosses (nodes): the crosses can become the points the robot has to go to. Moreover, a map of the overall environment is exploited as a "database" (i.e., it plays the Web service role).
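A minimal sketch of such a map "database", assuming (as in Fig. 7) that crosses are indexed nodes and each road carries a load. Marking obstructed roads with an infinite load is a convention of ours, not necessarily that of the actual implementation:

```java
// Minimal sketch of the environment map played as a "database": crosses
// are indexed nodes, and each road between two crosses carries a load.
// Names and the infinite-load convention for obstructed roads are ours.
public class RoadMapSketch {

    public static final double NO_ROAD = Double.POSITIVE_INFINITY;

    private final double[][] loads;   // adjacency matrix of road loads

    public RoadMapSketch(int crosses) {
        loads = new double[crosses][crosses];
        for (double[] row : loads) java.util.Arrays.fill(row, NO_ROAD);
    }

    // Declare a road between two crosses with a symmetric load.
    public void addRoad(int a, int b, double load) {
        loads[a][b] = load;
        loads[b][a] = load;
    }

    // Called when a robot reports an obstacle: the road becomes unusable.
    public void markObstructed(int a, int b) {
        loads[a][b] = NO_ROAD;
        loads[b][a] = NO_ROAD;
    }

    public double load(int a, int b) { return loads[a][b]; }
}
```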

The agents' behavior can be described in detail with reference to Fig. 5, where a conceptualization of the data flow is presented.

4.2.1. The Brain Agent (BA)

This agent is the core of the system. It acts as the system master and, once it receives the task to do, it works autonomously, generating all the commands, requests and actions needed to successfully perform the task. In particular, the BA first evaluates the task that has to be carried out and then requests the PPA to evaluate the situation and the available data in order to suggest the best path to follow. To do all that, a message with the starting and target points is sent to the PPA.

The BA waits until a suggestion for the task becomes available (i.e., it receives a propose message from the PPA) and, after that, it evaluates the message and converts it into a set of commands for the PHYA. Such a message contains information about all the crosses that have to be traveled and about how to do it, such as the directions to choose at the crosses and the orientation that the robot must have in order to comply with them. Once the commands have been sent to the PHYA, the BA switches to a waiting state.

Fig. 4. Basic scheme of the application (continuous line = basic application; dotted line = extension).

When an informative message from the PHYA is received, the BA reads it in order to understand what has occurred and, if the target has been successfully reached, the BA waits for another external command.

If a failure occurs, a new request is sent to the PPA. This request contains not only information about the current starting point and the same target point (which was not reached with the previous commands), but also information about the non-reached node (e.g., because the road is obstructed by a new obstacle), in order to update the map database and try to find a new optimal path.

4.2.2. The PHYsical Agent (PHYA) and the environment

The PHYA is the autonomous agent responsible for receiving the path that the robot has to follow in its real environment, translating it into robot commands, opening/closing the communication, receiving data and messages from the robot, and transmitting them, appropriately processed, to the BA.

This kind of agent remains in a waiting state until a message with a new task is received from the BA. This message is first evaluated in order to understand its content (i.e., a real message or an error). Second, if the message is correct, its content is split up in order to recognize the starting and final points of the task and the nodes that must be crossed.

Third, the agent starts the communication with the robot that is currently nearest to the starting point of the chain and commands it with the direction to be traveled and the next cross to be reached. The communication remains open for the whole duration of the robot's operations (the robot follows the chosen road until the subsequent cross or an obstacle is found), for sending the subsequent sub-tasks (i.e., next node, travel direction, road to keep at the cross), and for understanding whether the (sub-)task has been achieved or a new obstacle (e.g., a queue) has been found.

Once the overall task is achieved or an obstacle is found, the PHYA sets up a new information message for the Brain Agent containing the data of the last reached node (and the orientation of the robot), the target node, and, in case of an obstacle, the first non-reached node of the task. If an obstacle is found, the robot is moved so as to travel back to the previously reached node.

Finally, the communication with the robot is closed, the message is sent to the BA and the PHYA switches to a waiting state until a new message with a new task is received.
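The waiting/executing/reporting cycle just described can be summarized as a small state machine. The states, events and names below are our own simplification of the PHYA behavior, not the actual implementation:

```java
// Hypothetical sketch of the PHYA control loop as a small state machine.
// States and transitions follow the textual description above.
public class PhysicalAgentSketch {

    public enum State { WAITING, EXECUTING, REPORTING }

    private State state = State.WAITING;

    // Feed the agent one event at a time and observe the state it moves to.
    public State onEvent(String event) {
        switch (state) {
            case WAITING:
                if (event.equals("task-received")) state = State.EXECUTING;  // new task from the BA
                break;
            case EXECUTING:
                // both success and an obstacle end the execution phase
                if (event.equals("target-reached") || event.equals("obstacle-found"))
                    state = State.REPORTING;
                break;
            case REPORTING:
                if (event.equals("message-sent")) state = State.WAITING;     // back to waiting
                break;
        }
        return state;
    }
}
```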

4.2.3. The Path Planning Agent (PPA)

The PPA is the one in charge of finding the best path for the proposed task by exploiting the available information. The optimal path planning is calculated using AI algorithms. The available information in this case (i.e., the Web Services) is constituted by the map of the environment with roads and crosses, and the load associated with each way between two crosses (i.e., distance). Depending on the information that the BA sends, these loads are continuously updated in order to have a better, up-to-date representation of the real world.

The request message is evaluated and, after updating the map if needed (the necessity depends on the state of the action), the starting and target points are extracted. These points are then processed by an intelligent algorithm that computes the best path. Dijkstra's algorithm has been implemented for this purpose.
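For reference, a compact version of Dijkstra's algorithm over an adjacency matrix of loads, in the spirit of the PPA (the class name, the infinite-load convention and the graph used in the test are illustrative, not the actual experimental map):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Compact Dijkstra implementation over an adjacency matrix of road loads,
// sketching the PPA path computation. Names and conventions are ours.
public class PathPlannerSketch {

    static final double BLOCKED = Double.POSITIVE_INFINITY;

    // loads[i][j] is the load of the road between cross i and cross j
    // (BLOCKED if there is no road, or if an obstacle has been reported).
    public static List<Integer> shortestPath(double[][] loads, int start, int target) {
        int n = loads.length;
        double[] dist = new double[n];
        int[] prev = new int[n];
        boolean[] done = new boolean[n];
        Arrays.fill(dist, BLOCKED);
        Arrays.fill(prev, -1);
        dist[start] = 0.0;
        for (int iter = 0; iter < n; iter++) {
            // pick the unvisited node with the smallest tentative distance
            int u = -1;
            for (int v = 0; v < n; v++)
                if (!done[v] && (u == -1 || dist[v] < dist[u])) u = v;
            if (dist[u] == BLOCKED) break;   // remaining nodes are unreachable
            done[u] = true;
            for (int v = 0; v < n; v++)
                if (loads[u][v] < BLOCKED && dist[u] + loads[u][v] < dist[v]) {
                    dist[v] = dist[u] + loads[u][v];
                    prev[v] = u;
                }
        }
        // rebuild the path by walking predecessors back from the target
        List<Integer> path = new ArrayList<>();
        for (int v = target; v != -1; v = prev[v]) path.add(0, v);
        return path.get(0) == start ? path : List.of();
    }
}
```

Setting the load of an obstructed road to infinity before re-invoking the search reproduces the re-planning behavior described in Section 4.2.1.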

Once the solution is found, a suggestion message is sent to the BA and the PPA switches to a waiting state until another request arrives.

4.3. Experimental results

A physical environment has been implemented through a grid composed of several roads and crosses, a set of obstacles and a

Fig. 5. Logical structure and data flow of the system.


set of robots. As illustrated in Fig. 6(a), the roads have been represented by white lines on a black background. The obstacles are physical elements that can be placed on or removed from the path that a robot has to follow.

In the experiment, a LEGO NXT robot has been employed in a three-wheeled (tricycle) configuration with two motors that control the two tractor wheels (see Fig. 6(b)). The robot is equipped with different sensors; in particular, two classes of them have been used. Two light sensors have been installed at the robot front in order to follow the chosen road and recognize the crosses. Also, an ultrasonic distance sensor has been included in the robotic system in order to detect obstacles in a range between 10 and 15 cm.

Java is the programming language that has been used to implement the overall system. In particular, in order to command and control the robot, the iCommand API has been exploited and the leJOS firmware has been employed (leJOS, 2009). Regarding the software agents, the Java Agent DEvelopment Framework (JADE) (Bellifemine et al., 2008) has been used.

Fig. 6. Experimental test-bed.

A test-example is depicted in Fig. 7, where the starting point is node B and the target is node P. The actual obstacle is a new one, not present in the database, and all the node-to-node road-stretch loads have the same initial value. The best route computed with the available stored data is the B-F-L-P path. This path is sent to the robot in B, which moves correctly towards the target, reaching first node F and then node L. After that, when it tries to travel the L-P road, the ultrasonic sensor detects the obstacle and the robot stops its motion, sends a message to the PHYA informing it that the road is obstructed, and moves so as to reach node L again, but oriented in the L-F direction. When node L is reached, the robot stays at this position waiting for a new path to follow. On the other side, the

Fig. 7. Scheme of a possible environment of roads and crosses (the crosses are inserted in an adjacency matrix and a letter is associated with each one; each way is defined considering the previous and the following nodes with respect to the node under consideration).


software agents update the database with the new obstacle and recalculate the best path, taking into account the new starting position and orientation of the robot. Once computed, the new best path calculated by means of Dijkstra's algorithm (L-I-O-P or L-M-Q-P) is sent to the robot, which then moves and follows the new route until reaching node P. The information about the final node reached is then sent to the PHYA, which informs the BA about it. The whole system and all agents are set into a waiting state until a new target is defined.

Different tests with different loads, paths and obstacles have been carried out. Thanks to the reduced complexity of the logical system and algorithm implemented, the binary nature of both color detection (i.e., white or black) and obstacle identification (i.e., yes or no), and the effectiveness of the Bluetooth communication in a test laboratory with distances ranging between 1 and 5 meters, the overall implemented system does not present significant failures or errors. Hence, the tests carried out show that the system works correctly and that the idea of exploiting both the data provided by the Web service agent and the data collected through the "Robot Agent" and the physical robots is effective and can lead to complex and powerful systems.

5. Conclusions and future work

In this work, we describe a system deployed as a platform able to exploit the large amount of information available on the Web and in databases, together with the information that can be collected by physical systems (robots) in a real, dynamic environment.

The proposed platform seamlessly integrates four main types of elements: Web services, robot-provided services, intelligent agents and ontologies. Ontologies are the true key to achieving the platform's feasibility, as they enable effective communication between the agents in the platform and the available services. With this approach, software applications can benefit from the autonomy, pro-activeness, dynamism and goal-oriented behavior that physical agents (i.e., robots) can provide, so that both the Web and robots can be successfully used to obtain useful, relevant information.

In order to test the soundness of the framework proposed here, artificial intelligence and (software and physical) agent techniques have been used to implement a test-case application in a traffic control scenario. The problem consisted in performing simplified path-route planning in a physical world of roads and crosses where the available data or information about the free roads can change over time.

The application works with an agent (i.e., the Brain Agent) that operates autonomously and coordinates two other kinds of agents, whose functionality is the following. The mission of the Path Planning Agent is to find the best path (currently Dijkstra's algorithm has been adopted, but different or concurrent intelligent algorithms can be implemented in the future) by processing the available information and updating it according to the information coming from the physical world. On the other hand, the Physical Agent sends the robots information about both the tasks to be performed and the states to be reached. This agent also collects the information about the real world that robots gather through their sensors.

The implemented system shows effective behavior, which encourages prospects for future, more complex applications. Several issues remain to be addressed. First, as pointed out above, a mediation mechanism is necessary when dealing with several disparate ontologies. In our system, this problem has been overcome by adopting and implementing ad hoc solutions and using simplified test-case scenarios in order to delimit the problem. Similarly, other service-related functions (e.g., discovery, selection, composition) have been implemented ad hoc. The role-based approach eases the inclusion, in the form of plugins, of already tested, more sophisticated implementations of all these components. Second, future research will cover both increasing the capabilities of the robots and implementing other concurrent agents. Finally, real traffic applications with agents that can search for information on the Web, databases with maps of real city roads, and robots with better sensing and communication capabilities will be investigated.

Acknowledgement

This work has been partially supported by the Spanish Ministry for Industry, Tourism and Commerce under projects TSI-020400-2009-127, TSI-0204000-2009-148 and TSI-020100-2009-263, and has been developed within the PRIN 2006 project, granted by the Italian Ministry of Education, University and Research (MIUR).

References

Battle, S., Bernstein, A., Boley, H., Grosof, B., Gruninger, M., Hull, R., Kifer, M., Martin, D., McIlraith, S., McGuinness, D., Su, J., & Tabet, S. (2005). Semantic Web Services Framework (SWSF) overview. W3C member submission, 9 September 2005, World Wide Web Consortium.

Bellifemine, F., Caire, G., Poggi, A., & Rimassa, G. (2008). JADE: A software framework for developing multi-agent applications. Lessons learned. Information & Software Technology, 50(1–2), 10–21.

Berners-Lee, T., Hendler, J., & Lassila, O. (2001). The semantic Web. Scientific American, May, 34–43.

Bertrand, R., Bruckner, J., & van Winnendael, M. (2001). Nanokhod – a micro-rover to explore the surface of Mercury. In Proceedings of the 6th international symposium on artificial intelligence and robotics & automation in space: i-SAIRAS 2001. Canadian Space Agency, St-Hubert, Quebec, Canada.

Booth, D., Haas, H., McCabe, F., Newcomer, E., Champion, M., Ferris, C., & Orchard, D. (2004). Web services architecture. W3C working group note, World Wide Web Consortium.

Bradshaw, J., Dutfield, S., Benoit, P., & Woolley, J. (1997). KAoS: Toward an industrial-strength generic agent architecture.

Bradshaw, J. M., Uszok, A., Jeffers, R., Suri, N., Hayes, P. J., Burstein, M. H., Acquisti, A., Benyo, B., Breedy, M. R., Carvalho, M. M., Diller, D. J., Johnson, M., Kulkarni, S., Lott, J., Sierhuis, M., & van Hoof, R. (2003). Representation and reasoning for DAML-based policy and domain services in KAoS and Nomads. In AAMAS (pp. 835–842). ACM.

Bradshaw, J. M., Greaves, M., Holmback, H., Karygiannis, T., Jansen, W., Silverman, B.G., et al. (1999). Agents for the masses? IEEE Intelligent Systems, 14(2), 53–63.

Bradshaw, J. M., Suri, N., Cañas, A. J., Davis, R., Ford, K. M., Hoffman, R. R., et al.(2001). Terraforming cyberspace. IEEE Computer, 34(7), 48–56.

Bruijn, O. D., & Stathis, K. (2003). Socio-cognitive grids: The net as a universal human resource. In Proceedings of tales of the disappearing computer.

Clancey, W., Sierhuis, M., Kaskiris, C., & van Hoof, R. (2003). Advantages of Brahms for specifying and implementing a multiagent human-robotic exploration system. In Proceedings of the FLAIRS 2003.

Dautenhahn, K., Woods, S., Kaouri, C., Walters, M., Koay, L., & Werry, J. (August 2–6, 2005). What is a robot companion – friend, assistant or butler? In Proceedings of

R. Vidoni et al. / Expert Systems with Applications 38 (2011) 7430–7439 7439

the IEEE international conference on intelligent robots and systems, IROS 2005 (pp. 1488–1493). Edmonton, Alberta, Canada.

Durrant-Whyte, H., & Bailey, T. (2006). Simultaneous localisation and mapping (SLAM): Part I. The essential algorithms. IEEE Robotics and Automation Magazine, 13, 99–110.

Elamy, A. (2005). Perspectives in agent-based technology. AgentLink News, 18, 19–22.

Farajmandi, M., Gu, J., Meng, M., Liu, P., & Chen, Y. (2003). Internet-based wireless mobile robot network. In Proceedings of the IEEE intelligent automation conference, 2003 (Vol. 2, pp. 549–554). Hong Kong.

Fensel, D., & Bussler, C. (2002). The Web service modeling framework WSMF. Electronic Commerce Research and Applications, 1(2), 113–137.

Freedy, A., McDonough, J., Jacobs, R., Freedy, E., Thayer, S., Weltman, G., Kalphat, M., & Palmer, D. (2004). A mixed initiative human-robots team performance assessment system for use in operational and training environments. In Proceedings of the PerMIS 2004.

Freedy, A., Sert, O., Freedy, E., Weltman, G., McDonough, J., Tambe, M., & Gupta, T. (2008). Multiagent adjustable autonomy framework (MAAF) for multi-robot, multi-human teams. In International symposium on collaborative technologies (CTS).

García-Sánchez, F., Fernández-Breis, J., Valencia-García, R., Gómez, J., & Martínez-Béjar, R. (2008). Combining Semantic Web technologies with multi-agent systems for integrated access to biological resources. Journal of Biomedical Informatics, 41(5), 848–859. Special issue on semantic mashup of biomedical data.

García-Sánchez, F., Martínez-Béjar, R., Valencia-García, R., & Fernández-Breis, J. (2009). An ontology, intelligent agent-based framework for the provision of semantic Web services. Expert Systems with Applications, 36(2P2), 3167–3187.

Hollingum, J. (1999). Robots for the dangerous tasks. Industrial Robot: An International Journal, 26(3), 178–183.

Innocenti, B., López, B., & Salvi, J. (2009). Resource coordination deployment for physical agents. In From agent theory to agent implementation, 6th international workshop, AAMAS 2008.

Johnson, M., Feltovich, P., Bradshaw, J., & Bunch, L. (2008a). Human-robot coordination through dynamic regulation. In Proceedings of the IEEE ICRA'08. Pasadena, CA, USA.

Johnson, M., Intlekofer, K., Jung, H., Bradshaw, J., Allen, J., Suri, N., & Carvalho, M.(2008b). Coordinated operations in mixed teams of humans and robots. InProceedings of the IEEE DHMS 2008.

Kaminka, G. A. (2004). Robots are agents, too! AgentLink News, 16, 16–17.

Kaminka, G. A. (2007). Robots are agents, too! In E. H. Durfee, M. Yokoo, M. N. Huhns, & O. Shehory (Eds.), AAMAS (p. 4). IFAAMAS.

leJOS (2009). Java for LEGO Mindstorms. Available online: <http://lejos.sourceforge.net>.

Li, K., Verma, K., Mulye, R., Rabbani, R., Miller, J. A., & Sheth, A. P. (2006). Designing semantic Web processes: The WSDL-S approach. In J. Cardoso & A. P. Sheth (Eds.), Semantic Web services: Processes and applications, semantic Web and beyond: computing for human experience (Vol. 3, pp. 161–193). Springer.

Martin, D., Burstein, M., McDermott, D., McIlraith, S., Paolucci, M., Sycara, K., et al. Bringing semantics to Web services with OWL-S (Vol. 10, pp. 243–277). Hingham, MA, USA: Kluwer Academic Publishers.

McGuinness, D. L., & van Harmelen, F. (Eds.) (2004). OWL Web Ontology Language overview. W3C recommendation, 10 February 2004, World Wide Web Consortium.

McIlraith, S., Son, T., & Zeng, H. (2001). Semantic web services. IEEE IntelligentSystems, 16(2), 46–53.

Noy, N., & Halevy, A. (2005). Semantic integration. AI Magazine, 26(1), 7–10.

Obraczka, K., Boice, J., Weitzenfeld, A., Martinez, L., Francois, J., & Levin, A. (October 15–17, 2007). STAR: Ad-hoc wireless networking for autonomous multi-robot coordination. In Proceedings of the Robocomm 2007. Athens, Greece.

Pavón, J., Gómez-Sanz, J., & Fuentes, R. (2005). The INGENIAS methodology and tools. In Agent-oriented methodologies (pp. 236–276). Idea Group Publishing.

Pineau, J. (2003). Towards robotic assistants in nursing homes: Challenges and results. Robotics and Autonomous Systems, 42(3–4). Special issue on socially interactive robots.

Pitt, J., & Artikis, A. (2003). Socio-cognitive grids: A partial ALFEBIITE perspective. In First international workshop on socio-cognitive grids.

Pynadath, D. V., & Tambe, M. (2003). An automated teamwork infrastructure for heterogeneous software agents and humans (Vol. 7, pp. 71–100). Hingham, MA, USA: Kluwer Academic Publishers.

Roman, D., Keller, U., Lausen, H., de Bruijn, J., Lara, R., Stollberg, M., et al. (2005). Web service modeling ontology. Applied Ontology, 1(1), 77–106.

Ryutov, T. (2007). A socio-cognitive approach to modeling policies in open environments. In Proceedings of the 8th IEEE international workshop on policies for distributed systems and networks.

Sakagami, Y., et al. (2002). The intelligent ASIMO: System overview and integration. In Proceedings of the IEEE international conference on intelligent robots and systems, IROS 2002.

Scerri, P., Pynadath, D. V., Johnson, W. L., Rosenbloom, P. S., Si, M., Schurr, N., & Tambe, M. (2003). A prototype infrastructure for distributed robot-agent-person teams. In AAMAS (pp. 433–440). ACM.

Schulz, D., Burgard, W., Fox, D., Thrun, S., & Cremers, A. (2000). Web interfaces for mobile robots in public places. IEEE Robotics and Automation Magazine, 7(1).

Sierhuis, M. (2001). Brahms: A multi-agent modeling and simulation language for work system analysis and design. Doctoral Thesis, University of Amsterdam.

Sierhuis, M., Bradshaw, J., Acquisti, A., van Hoof, R., Jeffers, R., & Uszok, A. (2003). Human-agent teamwork and adjustable autonomy in practice. In Proceedings of the i-SAIRAS 2003.

Skrzypczynski, P. (2001). Guiding a mobile robot with an internet application. In Proceedings of the IEEE/RSJ international conference on intelligent robots and systems conference, IROS 2001. Vol. 2 (pp. 649–654).

Studer, R., Benjamins, R., & Fensel, D. (1998). Knowledge engineering: Principles and methods. Data and Knowledge Engineering, 25(1–2), 161–197.

Suri, N., Bradshaw, J. M., Breedy, M. R., Groth, P. T., Hill, G. A., Jeffers, R., Mitrovich, T. S., Pouliot, B. R., & Smith, D. S. (2000). NOMADS: Toward a strong and safe mobile agent system. In Agents (pp. 163–164).

Thrun, S. (2003). Robotic mapping: A survey. Exploring artificial intelligence in the new millennium. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.

Tomizawa, T., Ohya, A., & Yuta, S. (2002). Book browsing system using an autonomous mobile robot teleoperated via the internet. In Proceedings of the IEEE/RSJ international conference on intelligent robots and system, IROS 2002. Vol. 2 (pp. 1284–1289).

Tomizawa, T., Ohya, A., & Yuta, S. (2003). Remote book browsing system using a mobile manipulator. In Proceedings of the IEEE international conference on robotics and automation, ICRA 2003. Vol. 1 (pp. 256–261).

Tso, F., Zhang, L., & Jia, W. (2007). Video surveillance patrol robot system in 3G, internet and sensor networks. In Proceedings of the 5th international conference on embedded networked sensor systems (pp. 395–396). Sydney, Australia.

Verma, K., & Sheth, A. P. (2007). Semantically annotating a web service. IEEE Internet Computing, 11(2), 83–85.

Vlassis, N. (2007). Synthesis lectures on artificial intelligence and machine learning. Morgan and Claypool Publishers. Ch. A concise introduction to multiagent systems and distributed artificial intelligence.

Wooldridge, M. (2002). An introduction to multiagent systems. John Wiley & Sons Ltd.

Wooldridge, M., & Ciancarini, P. (2000). Agent-oriented software engineering: The state of the art. In P. Ciancarini & M. Wooldridge (Eds.), AOSE. Lecture notes in computer science (Vol. 1957, pp. 1–28). Springer.

Zhang, X. Z., Rad, A. B., Wong, Y.-K., Huang, G., Ip, Y.-L., & Chow, K. M. (2007). A comparative study of three mapping methodologies. Journal of Intelligent and Robotic Systems, 49(4), 385–395.
