A new computerised learning system spots emotional sentiment, such as sarcasm and irony, in text messages and emails, and it could even detect content that suggests suicidal ideation.
Eden Saig, a computer science student at the Technion-Israel Institute of Technology, developed the computerised learning system, which works by recognising repeated word patterns.
Saig developed the system at the Technion’s Learning and Reasoning Laboratory after taking a course in artificial intelligence supervised by Professor Shaul Markovich of the Technion Faculty of Computer Science.
According to Saig, voice tone and inflection play an important role in conveying meaning in a spoken message.
In text and email messages, those nuances are lost, and writers who want to signify sarcasm, sympathy or doubt have taken to using images, or “emoticons,” such as the smiley face, to compensate.
“These icons are superficial cues at best. They could never express the subtle or complex feelings that exist in real-life verbal communication,” said Saig.
Recently, humorous pages on social networks such as Facebook and Twitter carried titles such as “superior and condescending people” or “ordinary and sensible people.”
Such pages are very popular in Israel, said Saig, and users are invited to submit suggestions for phrases that can be labelled as ‘stereotypical sayings’ for that particular page.
By observing posts to these groups, Saig identified recurring patterns. The method he developed enables the system to detect such patterns in new posts on any social network.
Since the content on these pages was colloquial, everyday language, Saig realised that “the content could provide a good database for collecting homogeneous data that could, in turn, help ‘teach’ a computerised learning system to recognise patronising-sounding semantics or slang words and phrases in text.”
Saig applied machine-learning algorithms to the content on these pages and used the results to automatically identify stereotypical behaviours found in everyday social network communication.
The quantification was carried out by examining 5,000 posts on social media pages and, through statistical analysis, training a learning system to recognise content structures that could be identified as condescending or slang.
The system was constructed to identify key words and grammatical habits characteristic of the sentence structures that carry these sentiments.
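The article does not publish Saig’s code, but a minimal sketch of this kind of classifier, written in Python with the scikit-learn library and using invented example posts and labels, might look like the following. It is an illustration of the general technique, not a description of Saig’s actual implementation.

    # A minimal illustrative sketch, not Saig's actual implementation:
    # a bag-of-words text classifier trained on posts labelled by the
    # page they were submitted to.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Hypothetical training data standing in for the 5,000 scraped posts.
    posts = [
        "Well, obviously everyone should already know that.",
        "Thanks for the tip, that was genuinely helpful!",
        "Let me explain it slowly so you can follow.",
        "Good point, I had not thought of it that way.",
    ]
    labels = ["condescending", "caring", "condescending", "caring"]

    # Word and word-pair counts stand in for the "key words and
    # grammatical habits" described above; naive Bayes supplies the
    # statistical analysis over those counts.
    model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
    model.fit(posts, labels)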
“Now, the system can recognise patterns that convey either condescending or caring sentiments, and it can even send a text message to the user if it thinks the post may be arrogant,” said Saig.
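Continuing the sketch above, the quoted behaviour, warning a user whose draft post looks arrogant, would amount to a simple prediction step. Again, this is an assumption about how such a feature could work, not a detail reported in the article.

    # Hypothetical use of the sketch above: flag a draft before posting.
    draft = "Honestly, it is not that hard to understand."
    if model.predict([draft])[0] == "condescending":
        print("Heads up: this post may come across as arrogant.")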
When applied to other networking pages, it may help detect content that suggests suicidal ideation, calls for help, or expressions of admiration or pleasure, Saig said.
PTI