
AMI Project Update - Machine Learning and Multimodal Interaction

May 23, 2008 | John Serrao
The AMI Consortium (http://www.amiproject.org) is a group of research institutes and universities collaborating to study how technology can help people "augment" their business meetings. The word "augmented" is, in fact, the first in the official title of the AMI Consortium projects: Augmented Multiparty Interaction, where "multiparty interaction" is a synonym for business meetings.

For the past five years the consortium has conducted research, and it is now approaching some very exciting milestones.

Many people are already aware of systems which automatically detect spoken words in telephone conversations. Since some of the Consortium members began research in this domain over a decade ago, significant progress has been achieved. Commercial companies such as IBM, Nuance, and NICE Systems commercialize their own technologies for a variety of applications.


The AMI Consortium is one of the world's leading independent centers for the development of conversational speech-to-text technology. The Consortium's work, unaffiliated with that of the companies mentioned above, reaches beyond voice signal processing: it also processes video signals, and when video is combined with the spoken word, much more can be done.


Applications of AMI Technologies

AMI Consortium research contributes to the development of "building blocks": software which can be integrated into complete solutions that bring value to users. The AMI Consortium technologies can be applied to a wide range of business challenges, but the consortium itself is focused on research objectives and will not bring these to market. To illustrate a few of the potential benefits to enterprise customers, a full-length white paper describing the Applications of AMI Technologies is available on the Web. You can also view a video:



Above: The AMI Consortium's vision video, conceived in early 2006. It follows an architectural firm that needs to work efficiently between two meetings with an important client on a large and urgent project, illustrating how AMI technologies can change how people work, together and alone.

Summaries

One of the early concept applications is the AMI meeting summarizer. Instead of someone manually writing out the minutes, action items, and conclusions/discussion points of a meeting, the summary can be produced automatically.
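
As a rough sketch of the underlying idea (my illustration, not the Consortium's actual algorithm), an extractive summarizer can score each transcript sentence by the corpus frequency of its content words and keep the top few, re-emitted in meeting order:

```python
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "we", "is", "it", "that"}

def content_words(sentence):
    # Lowercase, strip punctuation, drop function words.
    return [w.lower().strip(".,?!") for w in sentence.split()
            if w.lower().strip(".,?!") not in STOPWORDS]

def summarize(transcript, k=3):
    """Return the k highest-scoring sentences, in their original order."""
    freq = Counter(w for s in transcript for w in content_words(s))
    top = sorted(transcript,
                 key=lambda s: sum(freq[w] for w in content_words(s)),
                 reverse=True)[:k]
    return [s for s in transcript if s in top]
```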

The AMI Consortium demonstrates there is more to a summary than just a paragraph of "raw" text. One can imagine a suite of much richer layouts of information and graphics. For a better, scientific explanation of this idea, see a recent paper from the DFKI (German Research Centre for Artificial Intelligence) called "A generic layout-tool for summaries of meetings in a constraint-based approach" (final draft, pending publication). The paper was written by Sandro Castronovo (sandro.castronovo [at] dfki [dot] de), Jochen Frey (jochen.frey [at] itemis [dot] de), and Peter Poller (peter.poller [at] dfki [dot] de).

In the past year, the AMI Consortium has explored two metaphors commonly adopted for communicating about topics: a temporal scheme (a comic book or storyboard is a time-based metaphor for portraying how people interact and share over time) and a topical scheme (a newsletter or newspaper layout is organized around the topics captured in its headlines). In the past six months, Consortium researchers have taken this new representation of meeting summaries from a concept to something that receives data from other AMI building blocks (the automatic speech recognizer and the summarization algorithms, with weights and metadata on the video/images). This was first done using the AMI Multimodal Meeting Database, which is fully annotated. For more information about the Database, available under the Science Commons license for research, please visit the consortium's website.
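
To make that pipeline concrete, here is a hypothetical sketch of the final layout step. The Segment fields stand in for the real AMI annotations (timestamp, speaker, ASR text, an importance weight from the summarization algorithms, and a video keyframe); they are illustrative, not the Consortium's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start: float    # seconds into the meeting
    speaker: str
    text: str       # automatic speech recognizer output
    weight: float   # importance score from the summarization step
    keyframe: str   # path to a representative video frame

def storyboard(segments, max_panels=6):
    """Keep the most important segments, then order them by time so the
    panels read like a comic strip of the meeting."""
    chosen = sorted(segments, key=lambda s: s.weight, reverse=True)[:max_panels]
    for i, p in enumerate(sorted(chosen, key=lambda s: s.start), 1):
        print(f"Panel {i} [{p.start:6.0f}s] {p.speaker}: {p.text}  <{p.keyframe}>")
```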

Once summaries could be generated from the AMI Multimodal Meeting Database, the next step was to get automatic summarization operational on media which did not come from the internal meeting database. This has been achieved. In February, the entire AMI automatic annotation and summarization system was used on what scientists call "unconstrained" meeting content (meetings which are not part of the AMI Meeting Database). A storyboard summary was created in minutes from the data produced by the AMI building blocks (which "crunched" on an hour-long meeting for several hours to produce that data).


Content Linking

Another very important area of work for the AMI Consortium is what it calls "Content Linking." This is another example of what can be done when a robust speech-to-text algorithm runs during a meeting and is combined with other "modes" of communication.

Many people have noticed during a meeting that it would be helpful to consult past documents, e-mails, presentations, meeting minutes (or meeting fragments), or any other files in the internal corporate knowledge base.

The problem is that people don't do this routinely today: it would take too much time, and be too distracting, to go looking DURING a meeting for all the resources which MIGHT be relevant to a topic of discussion. In addition, the resources which could be useful to meeting participants are not indexed and organized in a fashion that makes retrieval and sharing easy.

With AMI technologies, people can have that service brought to their desktop computers automatically. The AMI Consortium envisions a real-time service which runs during the meeting and continually compares the topics of the meeting participants (their speech) with keywords of files already on the corporate servers. When matches are found, the system returns links to the relevant files in an interface, ranked by likely relevance. The links are all "hot," so all the user needs to do is click one for the content to open on screen.
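
A minimal sketch of such a service, under my own assumptions rather than the AMI implementation: a TF-IDF index over the corporate documents, queried periodically with a sliding window of recent speech-recognizer output, returning ranked links above a relevance threshold.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

class ContentLinker:
    """Match recent meeting speech against an index of corporate
    documents and return ranked links."""

    def __init__(self, doc_paths, doc_texts):
        self.doc_paths = doc_paths
        self.vectorizer = TfidfVectorizer(stop_words="english")
        self.doc_matrix = self.vectorizer.fit_transform(doc_texts)

    def suggest(self, recent_speech, top_n=5, threshold=0.1):
        query = self.vectorizer.transform([recent_speech])
        scores = cosine_similarity(query, self.doc_matrix).ravel()
        ranked = sorted(zip(scores, self.doc_paths), reverse=True)[:top_n]
        return [(path, score) for score, path in ranked if score >= threshold]

# Called every few seconds with the latest window of ASR output, e.g.:
#   linker.suggest("revised floor plan for the client atrium")
# might return [("//server/projects/atrium/floorplan_v3.ppt", 0.42), ...]
# (paths and scores here are invented for illustration).
```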


Mobile applications

One last area of progress in the past six months is the mobile-handset meeting avatar. The AMI Consortium continues to expand its mobile user interface for intelligent meetings. Today, demonstrations show how, using the mobile handset as a remote terminal, a person can request and be granted floor control during a meeting, and can also see what is going on in the remote location via small avatars in a graphic representation updated in real time. In other words, the video from the remote location is used to generate a simulated (low-bandwidth) meeting animation. This helps a remote participant who has only audio to see who has the focus of attention, to see who is speaking, and to give feedback as well.
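
The bandwidth saving comes from sending state rather than pixels. A hypothetical update message for such an avatar view might look like the following (the fields and encoding are my illustration, not the AMI protocol):

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class AvatarState:
    """One real-time update for the handset animation: a few bytes
    per participant instead of a video stream."""
    participant: str
    speaking: bool
    focus_of_attention: str   # whom or what this person is looking at
    has_floor: bool

def encode_update(states):
    # A compact text frame suffices at a few updates per second.
    return json.dumps([asdict(s) for s in states])

update = encode_update([
    AvatarState("alice", speaking=True, focus_of_attention="whiteboard", has_floor=True),
    AvatarState("bob", speaking=False, focus_of_attention="alice", has_floor=False),
])
```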

So, to wrap up this update of activities: the AMI Consortium partners continue to work on some very useful technologies. The AMI building blocks can be used for many different applications; they need only access to the content of meetings, past or in progress.


Multimodal Machine Learning specialists

Over the course of the past five years and two very sizable research grants from the European Union, the AMI Consortium has involved and integrated the work of approximately 150 graduate students and research assistants, either directly or indirectly. These people are and have been under the direction of approximately 35 senior scientists and faculty distributed across more than 10 institutions in seven countries.

Some commercial companies who partner with AMI (see the list of COI members: http://www.amiproject.org/vendors/) are interested in expanding their internal research and development groups. In tandem, some of the trained AMI experts are looking for opportunities to work on new or emerging projects with commercial objectives.

Many specialists in the Machine Learning for Multimodal Interaction field will be attending the AMI Consortium-sponsored annual conference on the topic. In 2008, MLMI (http://www.mlmi.info) will be held September 8-10 in Utrecht, the Netherlands.

In conjunction with MLMI, the AMI technology-transfer consultant, Christine Perey (cperey (at) perey (dot) com), is organizing meetings on September 10 between those who seek this expertise and people from the AMI Consortium and related research labs to discuss potential employment and/or consulting activities.

To learn more about this special opportunity and the AMI Knowledge and KnowHow Transfer programme, please visit:
http://www.amiproject.org/business-portal/ami-knowledge-and-knowhow-transfer
