The MindGarage and Insiders Technologies GmbH are working on Conversational Intelligence (CI) to make machines understand Natural Language as well as humans do. CI ranges from Chat-Bots to core modules like Named Entity Recognition, Intent Classification, Visual Question Answering, and Question Answering and Reasoning. We deal with all of these challenging tasks in our everyday work. To give Conversational Intelligence a lateral perspective and to figure out fresh solutions for its challenges, we are bringing together computer scientists from all around Europe for what we call the Ovation Summer Academy alpha (OSA-alpha). The goal is to come up with innovative ideas to tackle the challenges in Conversational Intelligence, work as a team, program like there is no tomorrow, and build Jarvis by the end (or at least try to :-P). This is a pre-announcement for the Ovation Summer Academy alpha; the official announcement will be made tomorrow on the MindGarage homepage.
If you want to quickly apply for the OSA-alpha, you can do it here.
Experience in AI for NLP is a minimum requirement for participating in the event.
Experience in Deep Learning models for NLP is an added advantage. We will provide a generic framework for handling datasets and boilerplate code for training Deep Learning models. These can be used out of the box and can easily be extended as well. The frameworks we will support are TensorFlow, Keras, TFLearn, spaCy, and NLTK.
Knowledge of current methods like Word2Vec, Attention Mechanisms, and Recurrent Neural Networks is not a strict requirement, but it is good to have.
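To give a flavour of what working with pre-trained word representations looks like, here is a minimal sketch using spaCy's built-in word vectors; the model name and the example words are our own illustrative choices, not part of the provided boilerplate:

```python
# Minimal sketch: querying pre-trained word vectors (in the spirit of Word2Vec)
# with spaCy. The model name and example words are just for illustration.
import spacy

nlp = spacy.load("en_core_web_md")  # medium English model ships with word vectors

tokens = nlp("king queen cellphone")
for a in tokens:
    for b in tokens:
        # cosine-based similarity between the two word vectors
        print(a.text, b.text, a.similarity(b))
```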
This year, OSA-alpha is going to take place in Berlin, Germany, at the innovative Campus (Saarbrücker Str. 36). Insiders Technologies is going to host all the participants of OSA-alpha, and all our sessions will take place in the old bakery.
As mentioned above, we are going to come up with innovative solutions and tools (demonstrators) for the challenges in Conversational Intelligence and try to build a curated framework from the models that the participants develop at OSA-alpha. The topics that we are going to cover are the following:
Classifying the intent of what a person says. Intent Classifiers are the core of Chat-Bots: for a machine to respond to a Natural Language query from a human, it needs to understand the human's intent, and once the intent is classified, a response can be generated.
We will cover the following sub-tasks under Intent Classification, which will be the building blocks of an Intent Classifier.
Given pairs of similar and dissimilar sentences, predict the semantic relatedness between them. One possibility would be to develop Machine Learning models to do the prediction and combine them with conventional approaches.
Given a sentence, classify it into one of a predefined set of classes. Train your models on labelled sentences and perform classification (a small sketch follows after these sub-tasks).
Try to perform one-shot learning on a very small dataset to classify sentences.
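To make the sentence-classification building block concrete, here is a minimal sketch of a recurrent intent classifier in Keras; the toy queries, the two intents, and all hyper-parameters are made-up assumptions, not part of our boilerplate:

```python
# Minimal sketch of an intent / sentence classifier in Keras.
# Toy data, label names and hyper-parameters are made up for illustration.
import numpy as np
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences

texts = ["book a table for two", "reserve a seat tonight",
         "will it rain tomorrow", "how warm is it outside"]
labels = np.array([0, 0, 1, 1])  # 0 = restaurant intent, 1 = weather intent

tokenizer = Tokenizer(num_words=1000)
tokenizer.fit_on_texts(texts)
x = pad_sequences(tokenizer.texts_to_sequences(texts), maxlen=8)

model = Sequential([
    Embedding(input_dim=1000, output_dim=32, input_length=8),
    LSTM(32),
    Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x, labels, epochs=10, verbose=0)

# Predict the intent of a new query (probabilities over the two toy intents)
query = pad_sequences(tokenizer.texts_to_sequences(["is it going to rain"]), maxlen=8)
print(model.predict(query))
```

With a real dataset, the same skeleton just needs more data, more classes, and a proper train/validation split.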
Once the machine has extracted the intent of a person’s query, it will need to parse it and gather useful information from it. To do so, the machine needs to identify Named Entities in the queries. These entities are used to reason about the query and to generate an answer or select one from a given template. For example, if the query is “What is Germany?”, then the machine needs to parse it into “What is Germany [Country]?”.
This task has two subtasks, which are:
Given a sentence, predict which entities are present in it and where.
Do the same as NER, but with very little training data.
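For reference, prediction with a pre-trained NER model can be sketched in a few lines with spaCy; the model name is an assumption, and this illustrates the first sub-task rather than the low-data variant:

```python
# Minimal sketch: Named Entity Recognition with a pre-trained spaCy model.
import spacy

nlp = spacy.load("en_core_web_sm")  # small pre-trained English model
doc = nlp("What is Germany?")

for ent in doc.ents:
    # entity text, its label (e.g. GPE for countries), and where it occurs
    print(ent.text, ent.label_, ent.start_char, ent.end_char)
```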
Generate or sample an answer given a question. For example, if a person asks “Am I alive?”, then the machine should be able to generate an answer (“yes”/“no”) or sample one from a set of predefined answers. After understanding the intent of a query and the entities in it, the machine would use a QA model to give an answer to the query.
There are several subtasks under the category of QA, and they are the following:
Given a question, sample an answer from a set of predefined answers or generate it.
Given a question and a set of facts related to the question, sample an answer from a set of predefined answers or generate it.
Given a comprehension passage and a question, find the answer in the passage.
Given an image and a question related to the image or objects in the image, sample an answer from a set of predefined answers or generate it.
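As a minimal sketch of the first sub-task above (picking an answer from a set of predefined answers), one could compare the incoming question to stored questions using spaCy's document similarity; the toy QA pairs and the model name are assumptions for illustration:

```python
# Minimal sketch: selecting an answer from predefined answers by comparing
# the incoming question to stored questions with spaCy document vectors.
import spacy

nlp = spacy.load("en_core_web_md")

# Toy question-answer pairs, made up for this example
qa_pairs = [
    ("What is the capital of Germany?", "Berlin"),
    ("Am I alive?", "yes"),
]

def answer(query):
    q = nlp(query)
    # pick the stored question most similar to the query and return its answer
    best_question, best_answer = max(qa_pairs,
                                     key=lambda pair: q.similarity(nlp(pair[0])))
    return best_answer

print(answer("Which city is the capital of Germany?"))  # -> "Berlin"
```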
We focus on the task of Sentiment Classification, where a sentence or phrase has to be classified according to the emotions it expresses. Sentences may express positive or negative emotions about certain elements contained in them. For example, the sentence “the battery is bad, but I find it a good cellphone” expresses negative emotions about a small part of the cellphone (namely, its battery), while still expressing positive emotions about the cellphone as a whole.
We say we perform Coarse-Grained Sentiment Classification when there are only two possible classes: Positive or Negative.
On the other hand, the task of Fine-Grained Sentiment Classification generally involves classifying several types of emotion, such as anger, sadness, disgust, or happiness.
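As a simple baseline for the coarse-grained case, NLTK ships a lexicon-based sentiment analyser (VADER); the thresholding on the compound score below is our own illustrative choice, and a trained Deep Learning classifier would replace it in practice:

```python
# Minimal sketch: coarse-grained sentiment classification with NLTK's VADER,
# a lexicon-based baseline (not a Deep Learning model).
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")  # one-time download of the VADER lexicon
sia = SentimentIntensityAnalyzer()

scores = sia.polarity_scores("the battery is bad, but I find it a good cellphone.")
# Illustrative decision rule: positive if the compound score is non-negative
label = "Positive" if scores["compound"] >= 0 else "Negative"
print(scores, label)
```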
Good that you asked! If you are interested in participating in the Ovation Summer Academy alpha, please fill in the application form here. It will be open until September 1st, 2017. The selected participants will be announced a few days later here on our blog. We are looking forward to seeing you in Berlin in September!
| Marcus Liwicki | John Gamboa | Ayushman Dash | Muhammad Zeshan Afzal |
|---|---|---|---|
| General Chair, MindGarage / Insiders Technologies | Co-Chair, MindGarage / Insiders Technologies | Co-Chair, MindGarage / Insiders Technologies | Scientific Co-Chair, MindGarage / Insiders Technologies |

| Werner Weiss | Insiders Technologies GmbH |
|---|---|
| CEO, Insiders Technologies | Sponsor |