ASSEMBLY
Automation of robotic assembly processes on the basis of an architecture of human cognition
Marcel Ph. Mayer · Christopher M. Schlick ·
Daniel Ewert · Daniel Behnen · Sinem Kuz ·
Barbara Odenthal · Bernhard Kausch
Received: 16 February 2011 / Accepted: 1 April 2011 / Published online: 21 April 2011
© German Academic Society for Production Engineering (WGP) 2011
Abstract A novel concept to cognitive automation of
robotic assembly processes is introduced. An experimental
assembly cell with two robots was designed to verify and
validate the concept. The cell's numerical control, termed
a cognitive control unit (CCU), is able to simulate human
information processing at a rule-based level of cognitive
control. To enable the CCU to work on a large range of
assembly tasks expected of a human operator, the cognitive
architecture SOAR is used. On the basis of a self-developed
set of production rules within the knowledge base, the CCU
can plan assembly processes autonomously and react to ad-
hoc changes in assembly sequences effectively. Extensive
simulation studies have shown that cognitive automation
based on SOAR is especially suitable for random parts
supply, which reduces planning effort in logistics. Con-
versely, a disproportional increase in processing time was
observed for deterministic parts supply, especially for
assemblies containing large numbers of identical parts.
Keywords Cognitive automation · SOAR · Assembly · Joint cognitive systems
1 Introduction
In high-wage countries many manufacturing systems are
highly automated. The main aim of automation is usually to
increase productivity and reduce personnel expenditures.
However, it is well known that highly automated systems are
investment-intensive and often generate a non-negligible
organizational overhead. Although this overhead is man-
datory for manufacturing planning, numerical control pro-
gramming and system maintenance, it does not directly add
value to the product to be manufactured. Highly automated
manufacturing systems therefore tend to be neither efficient
enough for small lot production (ideally one piece) nor
flexible enough to handle products to be manufactured in a
large number of variants. Despite the popularity of strategies
for improving manufacturing competitiveness like agile
manufacturing [1] that consider humans to be the most
valuable factors, one must conclude that especially in
high-wage countries the level of automation of many pro-
duction systems has already been taken far without paying
sufficient attention to the specific knowledge, skills and
abilities of the human operator.
According to the law of diminishing returns, such a naive
increase in automation will not only fail to yield a sig-
nificant increase in productivity but can also have adverse
effects. According to Kinkel et al. [2], the number of
process errors is on average significantly reduced by
automation, but the severity of potential consequences of a
single error increases disproportionately. These "ironies of
automation", which were identified by Lisanne Bainbridge
as early as 1987, can be considered a vicious circle [3],
where a function that was allocated to a human operator
due to poor human reliability is automated. This automa-
tion results in higher function complexity, ultimately
increasing the cognitive loads of the human operator for
M. Ph. Mayer (✉) · C. M. Schlick · S. Kuz · B. Odenthal · B. Kausch
Institute of Industrial Engineering and Ergonomics,
RWTH Aachen University, Aachen, Germany
e-mail: [email protected]
D. Ewert
Institute of Information Management in Mechanical
Engineering, RWTH Aachen University, Aachen, Germany
D. Behnen
Laboratory for Machine Tools and Production Engineering,
RWTH Aachen University, Aachen, Germany
Prod. Eng. Res. Devel. (2011) 5:423–431
DOI 10.1007/s11740-011-0316-z
planning, teaching and monitoring, and hence leading to a
more error-prone system. To reduce the error potential one
could again extend automation and reinforce the vicious
circle. During the first iteration it is quite likely that the
overall performance of an automated system will increase,
but the potential risk taken is often severely underesti-
mated. Additional iterations usually deteriorate perfor-
mance and lead to poor system robustness.
The novel concept of cognitive automation by means of
simulation of human cognition aims at breaking this
vicious circle. Based on simulated cognitive functions,
technical systems shall not only be able to (semi-) auton-
omously carry out manufacturing planning, adapt to
changing supply conditions and be able to learn from
experience but also to simulate goal-directed human
behavior and therefore significantly increase the confor-
mity with operator expectations. Clearly, knowledge-based
behavior in the true sense of Rasmussen [4] cannot be
modeled and simulated, and therefore the experienced
machining operator plays a key architectural role as a
competent problem solver in unstable and non-predictable
situations.
2 Experimental assembly cell
One of today's challenges in manufacturing is the
increasing complexity of assembly processes due to an
increasing number of products that have to be assembled in
a large variety in production space [5]. Whereas in con-
ventional automation each additional product or variant
significantly increases the organizational overhead, cogni-
tively automated assembly cells are theoretically able to
autonomously plan, execute and replan the expected tasks
on the basis of a digital model of the product to be
assembled in conjunction with a set of production rules. No
explicit knowledge on how to solve the assembly problem
is needed. Therefore, these systems allow for flexible, cost-
effective and safe assembly.
Due to their design for assembly, many of today's
industrially processed components in mass and medium lot
size production are purposefully constrained so that their
assembly is only possible in a particular sequence or only a
few procedural variations are allowed (see [6]). The
assembly of these components is too simple to fully dem-
onstrate the flexibility and effectiveness of cognitive
automation. To fully develop and validate the novel con-
cept, mountable assemblies were chosen that can be gen-
erated in an almost unlimited number of variants in small
series production. One of the requirements for the build-
ing blocks is that they allow arbitrary configuration and
are completely interchangeable. LEGO building bricks,
from the Danish company of the same name, fulfill this
requirement and were therefore used for system design and
evaluation. Unlike complex free forming components (e.g.
interior elements in an automobile), the bricks are also easy
to describe mathematically. Nevertheless they allow for
very complex work processes because of the huge number
of permutations of assembly steps. This is easily shown by
a simple example: Building a small pyramid of only five
LEGO bricks with a foundation of two by two bricks can
be done using 24 different assembly sequences.
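The count of 24 sequences can be checked in a few lines. The sketch below assumes the pyramid consists of four base bricks (the two-by-two foundation) plus one top brick that rests on all four; the brick names and data structures are illustrative, not the CCU's internal representation:

```python
from itertools import permutations

# Four base bricks b1..b4 (the 2x2 foundation) and one top brick t.
# Assumed constraint (cf. the procedural model in Sect. 4.1.2): a brick
# may be placed only if it rests on the ground or all bricks below it
# have already been placed.
BASE = {"b1", "b2", "b3", "b4"}
BELOW = {"t": BASE}  # the top brick rests on all four base bricks

def is_valid(seq):
    placed = set()
    for brick in seq:
        if not BELOW.get(brick, set()) <= placed:
            return False
        placed.add(brick)
    return True

valid = [s for s in permutations(BASE | {"t"}) if is_valid(s)]
print(len(valid))  # 24: the top brick must come last, leaving 4! orderings
```

Of the 5! = 120 orderings of five bricks, exactly those with the top brick last are feasible, which yields 4! = 24 sequences.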
In order to study cognitive automation, an experimental
assembly cell was designed and a manufacturing scenario
was developed [7]. The scenario is as follows: An engineer
has designed a mechanical part of medium complexity with
a CAD system. The part can contain an arbitrary number of
bricks. The task for the assembly cell's cognitive control
unit (CCU) is to autonomously develop and execute a time
and energy efficient assembly sequence on the basis of the
CAD model using the available technical resources in
terms of robots, manipulators, grippers and clamping
devices, as well as supplied bricks, etc. The supply of
bricks can change dynamically.
In our assembly scenario (see Fig. 1), two robots carry
out a predefined repertoire of coordinated pick and place
operations. One robot is stationary (robot 1), the other
robot sits on a linear track (robot 2). A conveyor system
equipped with four individually controlled transfer lines,
pneumatic track switches and light barriers completes the
experimental system. The transfer lines are arranged so that
the parts can cycle around the working area. First, the
stationary robot grasps the bricks from a pallet and puts
them on a conveyor belt. The second robot, which is
waiting on the linear track for the part, has to identify the
brick, i.e. match it to a known library of bricks with respect
to color and shape. If the brick is included in the final state
of the product to be assembled, the robot will pick it from
the conveyor (which in a later step will comprise the task of
tracking the unknown position and synchronizing the robot
to a running track) and will put it on the working area
either at the corresponding position in the assembly or in a
buffer area for further processing. Otherwise, the brick can
keep circulating on the conveyor belt to reappear later or to
be removed.
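The identify-and-route decision for an incoming brick can be sketched as a lookup against a brick library. The library contents, action names and function signatures below are illustrative assumptions, not the actual CCU interfaces:

```python
# Hypothetical brick library: (color, shape) -> brick type.
LIBRARY = {
    ("red", "2x4"): "brick-2x4-red",
    ("blue", "2x2"): "brick-2x2-blue",
}

def classify(color, shape):
    """Match an observed brick against the known library; None = unknown."""
    return LIBRARY.get((color, shape))

def route(observation, needed):
    """Decide the robot's action for a brick arriving on the conveyor."""
    brick = classify(*observation)
    if brick is None:
        return "remove"       # unknown part: take it off the belt
    if brick in needed:
        return "pick"         # part of the target product: pick and place
    return "circulate"        # let it cycle around and reappear later
```

For example, `route(("red", "2x4"), {"brick-2x4-red"})` yields `"pick"`, while a brick not contained in the final state keeps circulating.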
3 Simulation of human cognition
An important foundation of cognitive automation is a
suitable simulation model of human cognition. Such a
model is also termed a cognitive architecture. In order to
simulate cognitive functions in a robotic assembly cell,
distinct criteria must be met.
When a function that was allocated to a human operator
has to be automated due to frequent human errors, the
reliability of the automated function clearly is the most
important technical criterion. We distinguish between two
aspects of reliability: reliability of the execution of the
assembly processes and reliability of the cognitive simu-
lation model controlling the process. Concerning the for-
mer aspect, we do not aim at high-fidelity modeling of
human cognition including (solely from a technical point of
view) inherent weaknesses like oblivion or decision bias,
but rather want to plan and execute predictable processes
on the basis of robust symbol processing. Hence, when
accessing knowledge in the artificial memory, access
should be unlimited, so that even rarely used knowledge
can be retrieved quickly and will not be forgotten.
Concerning the latter aspect of reliability we regard the
level of maturity of a cognitive architecture as an important
criterion. Even though no absolute measure is known for
the level of maturity of such a symbolic processor, the
amount of applications, the existence of a large and active
user community and the time the architecture has been
under continuous development are all taken as sub-indi-
cators. Moreover, since the automated assembly cell should
be controlled directly via the cognitive simulation model,
another criterion is the availability of suitable interfaces for
sensors and actuators.
There are many cognitive simulation models that can be
used to automate assembly processes. A systematic review
was carried out by Chong et al. [8]. The most popular are
ACT-R [9], ICARUS [10] and SOAR [11].
In the framework of a robotized assembly cell, SOAR
was chosen as a suitable simulation model because it sat-
isfies most of the aforementioned criteria. The design of the
CCU based on SOAR as well as selected simulation results
will be presented in Sects. 4 and 5.
There are applications for SOAR in other domains. In
the military domain, TACAIR-SOAR is used for training
[12]. The system is capable of executing most of the air-
borne missions that the U.S. military flies in fixed-wing
aircraft. A speech-enabled agent is used for indirect fire
training for a Forward Observer by providing fire direction
center support using the SOARSpeak voice interface [13].
An unmanned air vehicle controlled onboard by SOAR was
developed and tested by Putzer [14]. A detailed overview
of using SOAR for the control of unmanned vehicles can
be found in Onken and Schulte [3].
In the field of mobile robotics, a gait control system
based on SOAR was developed for a six-legged robot that
is able to move on unlevel terrain, avoid obstacles and walk
to a pre-specified GPS location [15].
However, the only application of SOAR that can be
related to manufacturing systems is the system called
ROBO-SOAR [16]. It is able to solve the three blocks
problem with outside guidance from a human operator.
The system incorporates camera surveillance and a robot
performing pick-and-place operations. No explicit knowl-
edge on how to solve the problem has to be input to the
system beforehand. This also holds true for the self-
developed CCU, which will be presented in the next
section.
4 Architecture of cognitive control unit
Cognitive systems for the automation of production pro-
cesses have to meet many functional and non-functional
requirements [17] through the design of the software
architecture. The system has to work on different levels of
abstraction.

Fig. 1 Design of the prototypical assembly cell [7]

This means, for instance, that the reasoning
mechanism cannot work on the raw sensor readings.
Instead an intermediate software component is required to
fuse and aggregate the sensor data. To meet the require-
ments, a multilayer software architecture [18] was devel-
oped, as depicted in Fig. 2.
The software architecture is separated into four layers
which incorporate the different mechanisms needed to
simulate human cognition. The presentation layer includes
the human–machine interface and an interface for the
modification of the knowledge base. The planning layer is
the deliberative layer in which the actual decision for the
next action in the assembly process is made. The coordi-
nation layer provides services to the planning layer that can
be invoked by the latter to start action execution. The
reactive layer is responsible for a low response time reac-
tion of the whole system in case of an emergency situation.
The knowledge module contains the necessary domain
knowledge of the system in terms of production rules.
At the beginning the human operator assigns the desired
goal g* to the CCU via the presentation layer. The desired
goal is compiled and enriched with additional assembly
information, which will be discussed in more detail in the
following section. It is then transferred to the planning
layer where the reasoning component derives the next
action u* based on the actual environmental state y* and
the desired goal g*. The actual environmental state is
estimated on the basis of sensor readings from a technical
application system (TAS). In the coordination layer the raw
sensor readings y are fused and aggregated into an envi-
ronmental state y*. Hence, all decisions in the planning
layer are based on the environmental state y* at a given
time. The decision process must therefore be short, since
otherwise the state of the TAS may already have changed significantly. The
next best action u* derived in the planning layer is sent
back to the coordination layer, where the abstract
description of the next best action u* is translated into a
sequence of actuator commands u, which are sent to the
TAS. In the TAS, the sequence of commands is executed
and the changed environmental state is measured again by
the sensors. If the new vector y of sensor readings indicates
an emergency situation, the reactive layer processes the
sensor data directly and sends the corresponding actuator
commands to the TAS.
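The data flow just described can be condensed into a single sense-plan-act pass. The sketch below is a simplified stand-in (all class and function names are illustrative, and the real CCU runs the layers concurrently rather than in one sequential function):

```python
class FakeTAS:
    """Stands in for the technical application system (robots, sensors)."""
    def __init__(self):
        self.sent = []
    def read_sensors(self):
        return {"gripper": "empty", "emergency": False}  # raw readings y
    def is_emergency(self, y):
        return y["emergency"]
    def send(self, u):
        self.sent.append(u)

def fuse(y):
    # coordination layer: aggregate raw readings y into symbolic state y*
    return {"holding": y["gripper"] != "empty"}

def next_action(y_star, goal):
    # planning layer: rule-based choice of the next best action u*
    return "GRASP" if not y_star["holding"] else "POSITION"

def translate(u_star):
    # coordination layer: expand u* into low-level actuator commands u
    return [f"cmd:{u_star}"]

def control_cycle(tas, goal):
    y = tas.read_sensors()
    if tas.is_emergency(y):     # reactive layer: bypass deliberation
        tas.send(["cmd:STOP"])
        return
    u = translate(next_action(fuse(y), goal))
    tas.send(u)

tas = FakeTAS()
control_cycle(tas, goal="pyramid")
print(tas.sent)  # [['cmd:GRASP']]
```

The emergency branch mirrors the reactive layer: it acts directly on the raw readings and never enters the (comparatively slow) planning layer.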
4.1 Development of the reasoning component
As shown by Mayer et al. [19], it is crucial for the human
operator to understand the subgoals and the planned actions
of the CCU to supervise the robotic assembly cell. This
raises the question of how the symbolic representation of
the knowledge base of the CCU must be designed to ensure
conformity with the operator's expectations. Proprietary
programming languages that are used in conventional
automation have to be learned for each domain and do not
necessarily match the mental model of the human operator.
In terms of a human-centered description for matching the
procedural knowledge to the mental model, one promising
approach is the use of motion descriptors, since motions are
familiar to the human operator from manual assembly
tasks. These motions are also easy to anticipate in human-
robot interaction. In mass production it is best practice to
break down complex tasks into fundamental motion ele-
ments. To do so, the MTM method [20] as a library of
fundamental movements is often used in industry. This
method was chosen to define the motion descriptors that
can be used by the CCU to plan and execute the robotic
assembly processes also used in small lot production [21].
Based on this concept, we followed the so-called Cog-
nitive Process method (CP method [14]). This method is
able to integrate software engineering and cognitive sys-
tems engineering. To do so, the structure of the behavioral
model is retained and the software code is developed on the
basis of a cognitive process. The a priori knowledge that is
needed to control the assembly cell was implemented in
SOAR following the four steps of the static model of the
CP method. Moreover, in the actual executable it is possible
that a production rule contains elements that can be
related to the different steps in the CP process. The a priori
knowledge of the reasoning component consists of a set of
42 production rules.

Fig. 2 Software architecture of the cognitive system [17]
4.1.1 Achievable model
First, the achievable model for the cognitive system has to
be defined as a desired goal, for all further actions depend
on this model. The desired goal in terms of the product to
be assembled is specified using a CAD software package.
In our particular scenario, the desired goal is the buildup of
an arbitrary structure of LEGO bricks, e.g. a pyramid of
identical bricks. Since SOAR's internal representation is
solely symbolic, the desired goal has to be compiled within
the presentation layer to meet formal requirements. Addi-
tionally, the desired goal is not only compiled but enriched
with meta-information. This meta-information can be seen
as the key to our concept of cognitive automation. Besides
information on position and rotation of each brick in the
desired goal, information about the relations of each brick
to its adjacent neighbors is included in the compiled
desired goal. We call these relations neighborhood rela-
tions. The neighborhood relations are solely symbolic. In
other words, if two bricks are nearest neighbors, we only
know about the fact and the direction of the neighboring
relationship. We do not know about a possible overlap in
Cartesian space. The achievable goal as used in SOAR
contains position, rotation, color, type and the neighboring
relations of each brick in the product to be assembled.
4.1.2 Procedural model
In the second step the knowledge about procedures to
achieve the desired goal has to be considered. Based on the
neighboring relationship of the achievable model and
additional constraints, the buildup of the product is planned
by elaboration rules. For example, a brick can only be
positioned if it is on the ground or if all of its neighbors
below have already been assembled.
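This elaboration rule amounts to a simple placement predicate. The sketch below uses illustrative data structures (a mapping from each brick to the set of bricks supporting it), not the CCU's internal SOAR representation:

```python
def placeable(brick, placed, below):
    """A brick may be positioned if it rests on the ground (no bricks
    below it) or if all of its supporting neighbors are already
    assembled -- the elaboration rule described in the text."""
    return below.get(brick, set()) <= placed

# Pyramid example: top brick "t" rests on base bricks "b1".."b4".
below = {"t": {"b1", "b2", "b3", "b4"}}
print(placeable("b1", set(), below))                      # True: ground brick
print(placeable("t", {"b1", "b2"}, below))                # False: supports missing
print(placeable("t", {"b1", "b2", "b3", "b4"}, below))    # True
```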
When using SOAR as a cognitive simulation model, one
also has to consider the operational model for designing the
procedural model according to SOAR's execution cycle
[22]. For all operations that should later be executed in the
application phase of SOAR, procedures have to be pro-
posed that fire in the proposal phase of the execution
cycle. Rules that propose a motion are part of the proce-
dural model but are strongly connected to rules that apply
the motion.
4.1.3 Operational model
The operational model puts the generated plans of the
procedural model into action. The basic fundamental
movements of the MTM-1 system were used to control
the movement of the robot on the linear track (see
Fig. 1). These movements are encoded as production
rules in the operational model. The motion operators are
REACH, GRASP, MOVE (including TURN), POSITION
and RELEASE (including APPLY PRESSURE). A par-
ticular rule can only be applied if the corresponding
motion operator was selected by the procedural model.
The five motion operators are the only action primitives
in this scenario that can manipulate the assembly.
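The fixed operator sequence of one pick-and-place cycle can be sketched as follows; this is an illustration of the five action primitives, not the CCU's actual rule encoding:

```python
from enum import Enum

class Motion(Enum):
    """The five MTM-1 motion operators used as action primitives.
    As stated in the text, MOVE subsumes TURN and RELEASE subsumes
    APPLY PRESSURE."""
    REACH = 1
    GRASP = 2
    MOVE = 3
    POSITION = 4
    RELEASE = 5

def pick_and_place():
    """One MTM-1 cycle: the operator sequence for transferring a brick."""
    return [Motion.REACH, Motion.GRASP, Motion.MOVE,
            Motion.POSITION, Motion.RELEASE]

print([m.name for m in pick_and_place()])
# ['REACH', 'GRASP', 'MOVE', 'POSITION', 'RELEASE']
```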
4.1.4 Environmental model
In the fourth step of the CP-method all elements that are
needed in the previous steps have to be mapped onto an
environmental model that can be used by the CCU. In the
developed scenario the gripper of the robot, the conveyer
belt, the brick feeder, the working area and the buffer are
modeled. These elements are transmitted to the cognitive
simulation model during initialization along with the goal
state.
4.2 Integration into assembly cell
Manufacturing systems like the experimental assembly
cell require robust, real-time-capable control hardware.
Although PC-based hardware and software is often used
for human supervisory control and for high level con-
trollers, embedded systems with real-time operating sys-
tems prevail as machine controllers. In the assembly cell
robot controllers supplied by the robot manufacturer are
used to control the handling robots. A motion controller
controls the conveyer belt as well as the track switches.
Additionally, a PC-based controller is connected to a
hand-like robot gripper with three fingers. Each controller
is able to execute control programs to perform movements
of the attached components and to interact with other
controllers via field bus or internet protocols. Action
primitives covering the basic features of the robots, con-
veyor belts, track switches and grippers were imple-
mented on the controllers to be remotely activated. These
action primitives, as well as the input signals from the
sensors, were made available to the reactive layer of the
CCU.
The assembly process requires the incoming LEGO
bricks to be identified and grasped before they can be
assembled in the working area. A computer vision system
with one camera is therefore used to detect the shape and
orientation of the bricks. The information it gathers is used
to track the bricks and to grasp one brick in real-time from
the moving conveyer belt. The real-time coupling between
the vision system and the robot is coordinated by the
reactive layer.
The reactive layer connects the TAS to the high-level
cognitive functions introduced. However, the reactive layer
must also provide capabilities for real-time reaction to
safety-critical events. In order to ensure a minimum
response time, conventional compiled software
code is used. This poses no restriction on higher cognitive
functions since the rule-based behavior is determined by
the layers above the reactive layer. Actuator commands
received from the super-ordinate coordination layer are
either interpreted by and executed within the reactive layer
or passed to the TAS, where the robots, the motion con-
trollers or PC-based gripper controllers execute them.
Sensor readings from the TAS are also either passed on to
the coordination layer or processed within the reactive layer.
For instance, video streams from the camera are usually too
complex to be interpreted by SOAR at a symbolic level. In
this case the video streams are processed in the reactive
layer and only the extracted image information about the
bricks' size and shape is transmitted to higher layers.
5 System evaluation
5.1 Reasoning component
In the following, only simulation results regarding the
reasoning component of the CCU are presented due to space
limitations. The depending variables in the simulation study
are the processing time and the number of required pick and
place operations (termed MTM-1 cycles).
To evaluate the effect of the independent variables
on the dependent variables, we carried out independent
simulation runs for workpieces assembled from identical
bricks. The independent variables are (1) size of the
product to be assembled (six levels: four to 24 bricks in
steps of four), (2) number of bricks provided at the queue
(seven levels: one, four to 24 in steps of four) and feeding
regime (two levels: deterministic supply of needed bricks
and random supply including unneeded bricks). For each
combination of the levels of the independent variables 100
simulation runs were calculated. Self-developed simulation
software was used. The runs were scheduled for parallel
processing on the high-end Compute Cluster in the Center
for Computing and Communication at RWTH Aachen
University.
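The size of the study follows directly from the factor levels; the level lists below are reconstructed from the description above:

```python
from itertools import product

# Factor levels of the simulation study (Sect. 5.1):
sizes = range(4, 25, 4)              # product size: 4..24 bricks, steps of 4
queues = [1, 4, 8, 12, 16, 20, 24]   # bricks provided at the queue
regimes = ["deterministic", "stochastic"]
runs_per_cell = 100

cells = list(product(sizes, queues, regimes))
print(len(cells) * runs_per_cell)  # 6 * 7 * 2 * 100 = 8400 simulation runs
```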
The simulation results show that the desired target state
was assembled correctly by the CCU in all 8,400 runs.
Assembly errors or deadlocks did not occur. Regarding the
number of required MTM-1 cycles for a workpiece of a
given size and a queue of a given length, all simulated
sequences conform to the expected number of cycles. This is
shown in Fig. 3 for both feeding regimes.
The corresponding results for processing time are shown
in Fig. 4. The simulation results unambiguously show a
disproportional increase in processing time with increasing
part size and queue length for deterministic part feed.
Conversely, a stochastic part feed surprisingly leads to a
decrease in processing time over the queue length. This
counter-intuitive result can be explained by the way SOAR
processes production rules: Each needed brick in the queue
is matched to all possible positions within the target state.
Positive matches lead to proposals that have to be com-
pared. Hence, for deterministic part feed the number of
comparisons increases disproportionally due to the known
exponential worst-case runtime behavior of SOAR's
embedded RETE algorithm.
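A back-of-the-envelope model illustrates the effect. The multiplicative proposal count and pairwise-comparison cost below are a deliberate simplification for illustration, not the actual RETE cost model:

```python
def proposals(queue_len, open_positions):
    """Assumed simplification: every identical brick in the queue matches
    every open target position, so proposals grow multiplicatively."""
    return queue_len * open_positions

def comparisons(queue_len, open_positions):
    """Pairwise comparisons among proposals in one decision cycle."""
    p = proposals(queue_len, open_positions)
    return p * (p - 1) // 2

for q in (1, 4, 24):
    print(q, comparisons(q, open_positions=24))
```

Under this toy model, growing the queue from one to 24 identical bricks for a 24-position target inflates the per-cycle comparison count from 276 to 165,600, which mirrors the disproportional growth observed for deterministic feed.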
5.2 Design for human–machine compatibility
In order to be able to use the full potential of cognitive
automation, one ultimately has to expand the focus from a
traditional human–machine system to joint cognitive
systems [23, 24].

Fig. 3 Required MTM-1 cycles of the reasoning component of the CCU as a function of part size and number of bricks available at the queue (left: deterministic brick feed; right: stochastic brick feed)

In these systems both the human operator
and the cognitive technical system cooperate safely and
effectively at different levels of cognitive control to
achieve a maximum of human–machine compatibility.
Engineering methods like the presented CP method
[3, 14] primarily aim at technical design of cognitive sys-
tems. When developing joint cognitive systems that have to
conform to operator expectations, it is important to acquire
additional knowledge about the rules and heuristics
humans use in manual assembly.
To do so, two independent experimental trials with a
total of 36 subjects were carried out. Based on the data
three fundamental assembly heuristics could be identified
and validated [25]: (1) humans begin an assembly at edge
positions of the working area; (2) humans prefer to build in
the vicinity of neighboring objects; (3) humans prefer to
assemble in layers.
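The three heuristics can be sketched as a scoring function over candidate positions. The weights, geometry and tie-breaking below are illustrative assumptions for the sketch, not the validated production rules:

```python
def score(pos, placed, xmax, ymax):
    """Rank a candidate placement (x, y, z) by the three empirically
    identified heuristics; each satisfied heuristic adds one point."""
    x, y, z = pos
    at_edge = x in (0, xmax) or y in (0, ymax)            # (1) edges first
    near = any(abs(x - px) + abs(y - py) == 1 and pz == z
               for (px, py, pz) in placed)                # (2) near neighbors
    in_lowest = all(z <= pz for (_, _, pz) in placed)     # (3) layer by layer
    return int(at_edge) + int(near) + int(in_lowest)

# Pick the best of several candidates after two bricks are placed.
placed = {(0, 0, 0), (1, 0, 0)}
candidates = [(2, 0, 0), (5, 5, 0), (0, 0, 1)]
best = max(candidates, key=lambda p: score(p, placed, xmax=9, ymax=9))
print(best)  # (2, 0, 0): on the edge, adjacent to a neighbor, lowest layer
```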
To develop a "humanoid" operation mode for cognitively auto-
mated assembly systems, similar to the horse metaphor
for automated vehicles [26], the identified assembly
heuristics were formulated as production rules. When the
reasoning component is enriched with these rules, a sig-
nificant increase in the predictability of the robot when
assembling the products can be achieved [19]. In other
words, if the knowledge base is extended by the rules and
heuristics humans use, the system can be better anticipated
by the human operator because it is compatible with his/her
mental model of the assembly process. Hence, an increase
in predictability leads to more intuitive human-robot
cooperation and therefore increases safety significantly.
6 Summary and outlook
Especially in highly automated manufacturing systems that
are aiming at producing products in almost any variety in
product space, an increase in conventional automation will
not necessarily lead to a significant increase in productiv-
ity. Therefore, novel concepts towards proactive, agile and
versatile manufacturing systems have to be developed.
Cognitive automation is a promising approach to improve
proactive system behavior and agility. In cognitively
automated systems, the experienced machine operator
plays a key architectural role as a competent solver of
complex planning and diagnosis problems. Moreover, he/
she is supported by cognitive simulation models which can
quickly, efficiently and reliably solve algorithmic problems
on a rule-based level of cognitive control and take over dull
and dangerous tasks.
A very interesting finding is that the system is especially
efficient for stochastic part feed with a large variety in
product space. The CCU is therefore able not only to
reduce planning effort with autonomous assembly planning
but also to reduce preparatory work in logistics.
To be able to accomplish complex assembly tasks
without burdening the CCU with calculations that cannot be
solved in polynomial time, future investigations will focus
on a hybrid approach [27] where the predefined planning
problem is solved prior to the assembly by generating a
state graph [28] that describes all possible assembly
sequences for the intended product. This graph can also be
updated during assembly. The reasoning component within
SOAR uses this state graph to adapt the plan to the actual
state of the assembly and part supply.
To assist the human operator within this novel auto-
mation concept, additional laboratory studies of a self-
developed augmented vision system for error detection in
assembled parts were carried out [29]. However, the aug-
mented vision system has to be extended by a real-time
decision-support function based on SOAR.
In order to validate the introduced concepts and prototypes,
future investigations also have to focus on real
industrial products.

Fig. 4 Processing time in [s] of the reasoning component of the CCU as a function of part size and number of bricks available at the queue (left: deterministic brick feed; right: stochastic brick feed)

As stated before, many industrially
processed components allow only a few procedural varia-
tions. Hence, cognitive automation can be like breaking a
butterfly on a wheel. Therefore, as a first step, a modular
model of an engine was developed (see Fig. 5). The model
allows for arbitrary sequences of the assembly process but
provides sufficient complexity to demonstrate the flexibil-
ity and effectiveness of cognitive automation.
Acknowledgments The authors would like to thank the German Research Foundation (DFG) for its kind support of the research on
cognitive automation within the Cluster of Excellence "Integrative Production Technology for High-Wage Countries".
References
1. Zhang Z, Sharifi H (2000) A methodology for achieving agility in
manufacturing organizations. Int J Oper Prod Manage
20(4):496–512
2. Kinkel S, Friedwald M, Hüsing B, Lay G, Lindner R (2008)
Arbeiten in der Zukunft. Strukturen und Trends der Industriearbeit.
Studien des Büros für Technikfolgen-Abschätzung beim
Deutschen Bundestag, 27th edn. Sigma, Berlin (in German)
3. Onken R, Schulte A (2010) System-ergonomic design of cogni-
tive automation. Studies in computational intelligence, vol 235.
Springer, Berlin
4. Rasmussen J (1986) Information processing and human-machine
interaction. An Approach to Cognitive Engineering, North-
Holland
5. Wiendahl HP, ElMaraghy HA, Nyhuis P, Zäh MF, Wiendahl HH,
Duffie N, Brieke M (2007) Changeable manufacturing. Classifi-
cation, design and operation. Ann CIRP 56(2):783–809
6. Eversheim W (1998) Organisation in der Produktionstechnik, Bd.
Konstruktion. Springer, Berlin
7. Kempf T, Herfs W, Brecher C (2008) Cognitive control tech-
nology for a self-optimizing robot based assembly cell. In:
Proceedings of the ASME 2008 international design engineer-
ing technical conferences & computers and information in engineer-
ing conference. American Society of Mechanical Engineers, US
8. Chong HQ, Tan AH, Ng GW (2007) Integrated cognitive archi-
tectures: a survey. Artif Intell Rev 28:103–130
9. Anderson JR, Bothell D, Byrne MD, Douglass S, Lebiere C, Qin Y (2004) An integrated theory of the mind. Psychol Rev 111:1036–1060
10. Langley P, Cummings K, Shapiro D (2004) Hierarchical skills
and cognitive architectures. In: Proceedings of the twenty-sixth
annual conference of the cognitive science society. Chicago
11. Lehman J, Laird J, Rosenbloom P (2006) A gentle introduction to Soar, an architecture for human cognition: 2006 update. Retrieved 17 May 2010 from http://ai.eecs.umich.edu/soar/sitemaker/docs/misc/GentleIntroduction-2006.pdf
12. Jones RM, Laird JE, Nielsen PE, Coulter KJ, Kenny P, Koss FV (1999) Automated intelligent pilots for combat flight simulation. AI Magazine 20:27–41
13. Stensrud B, Taylor G, Crossman J (2006) IF-Soar: a virtual,
speech-enabled agent for indirect fire training. In: Proceedings of
the 25th army science conference, Orlando, FL
14. Putzer HJ (2004) Ein uniformer Architekturansatz für kognitive Systeme und seine Umsetzung in ein operatives Framework. Köster, Berlin (in German)
15. Janrathitikarn O, Long LN (2008) Gait control of a six-legged
robot on unlevel terrain using a cognitive architecture. In: Pro-
ceedings of the IEEE aerospace conference
16. Laird JE, Yager ES, Hucka M, Tuck CM (1991) Robo-Soar: an integration of external interaction, planning, and learning using Soar. Robot Auton Syst 8:113–129
17. Hauck E, Ewert D, Schilberg D, Jeschke S (2010) Design of a
knowledge module embedded in a framework for a cognitive
system using the example of assembly tasks. In: Proceedings of
the 3rd international conference on applied human factors and
ergonomics. Taylor & Francis, Miami
18. Gat E (1998) On three-layer architectures. In: Kortenkamp D, Bonnasso R, Murphy R (eds) Artificial intelligence and mobile robots, pp 195–211
19. Mayer M, Odenthal B, Faber M, Kabuß W, Kausch B, Schlick C (2009) Simulation of human cognition in self-optimizing assembly systems. In: Proceedings of the 17th world congress on ergonomics IEA 2009, Beijing
20. Maynard HB, Stegemerten GJ, Schwab JL (1948) Methods-time
measurement. McGraw-Hill, London
21. Mayer M, Odenthal B, Grandt M, Schlick C (2008) Task-oriented process planning for cognitive production systems using MTM. In: Karwowski W, Salvendy G (eds) Proceedings of the 2nd international conference on applied human factors and ergonomics (AHFE). USA Publishing, USA
22. Laird JE, Congdon CB (2006) The Soar user's manual, version 8.6.3
23. Hollnagel E, Woods DD (2005) Joint cognitive systems: foun-
dations of cognitive systems engineering. Taylor & Francis
Group, Boca Raton
24. Norros L, Salo L (2009) Design of joint systems: a theoretical challenge for cognitive system engineering. Cogn Tech Work 11:43–56
25. Mayer M, Odenthal B, Faber M, Kabuß W, Jochems N, Schlick C (2010) Cognitive engineering for self-optimizing assembly systems. In: Karwowski W, Salvendy G (eds) Advances in human factors, ergonomics, and safety in manufacturing and service industries. CRC Press, USA
26. Flemisch FO, Adams CA, Conway SR, Goodrich KH, Palmer MT, Schutte PC (2003) The H-metaphor as a guideline for vehicle automation and interaction. NASA/TM-2003-212672
Fig. 5 Modular engine model
Prod. Eng. Res. Devel. (2011) 5:423–431
27. Ewert D, Mayer M, Kuz S, Schilberg D, Jeschke S (2010) A hybrid approach to cognitive production systems. In: Proceedings of the 2010 international conference on intelligent robotics and applications (ICIRA 2010), Shanghai, China
28. Zaeh MF, Wiesbeck M (2008) A model for adaptively generating assembly instructions using state-based graphs. In: Mitsuishi M, Ueda K, Kimura F (eds) Manufacturing systems and technologies for the new frontier. Springer, Berlin
29. Odenthal B, Mayer M, Kabuß W, Kausch B, Schlick C (2009) An empirical study of assembly error detection using an augmented vision system. In: Virtual and mixed reality, VMR 2009, held as part of HCI International 2009, San Diego