Humans vs Machines: 'Sketch-a-Net' Interprets Rough Sketches Better Than Humans!

Computer vision researchers from Queen Mary University of London have released neural-network software that can identify rough, scribbled drawings produced by children, architects, design creatives, marketing personnel and other scrawlers more accurately than humans can.

The academics behind "Sketch-a-Net" claim their program is better at working out what such scrawls are meant to depict than ordinary humans are, reporting an accuracy rate of roughly 75 per cent.

Sketch-a-Net is a "deep neural network", meaning it loosely mimics the way the human brain processes information. It draws on both the shape of the object and the sequence in which the strokes were drawn to work out what the sketch depicts.
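For readers curious how drawing order might be fed to a network at all, here is a minimal, hypothetical sketch in Python (PyTorch). It is not the published Sketch-a-Net architecture; the class name, the three-way split of strokes into "early", "middle" and "late" channels, and the 250-class output are all assumptions made for illustration. The idea it shows is simply that rasterising different phases of the drawing into separate image channels lets a convolutional net see order as well as shape.

```python
# Hypothetical illustration only -- NOT the published Sketch-a-Net architecture.
# Strokes are grouped by when they were drawn (early/middle/late), each group is
# rasterised into its own channel, and a small CNN classifies the stacked image.

import torch
import torch.nn as nn


class StrokeOrderCNN(nn.Module):
    def __init__(self, num_classes: int = 250, in_channels: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),  # global average pool -> (batch, 128, 1, 1)
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_channels, H, W); each channel holds the strokes drawn
        # during a different phase of the sketch.
        h = self.features(x).flatten(1)
        return self.classifier(h)


if __name__ == "__main__":
    # Fake batch: 4 sketches, 3 stroke-order channels, 128x128 pixels.
    batch = torch.rand(4, 3, 128, 128)
    model = StrokeOrderCNN(num_classes=250)
    logits = model(batch)
    print(logits.shape)  # torch.Size([4, 250])
```

The channel counts and layer sizes here are arbitrary; the point is only the multi-channel encoding of stroke order, which is why such a system needs digitally captured sketches rather than finished drawings.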

“It’s exciting that our computer program can solve the task even better than humans can,” enthuses Timothy Hospedales, a specialist in neuroinformatics at Queen Mary University of London. “Sketches are an interesting area to study because they have been used since pre-historic times for communication and now, with the increase in use of touch-screens, they are becoming a much more common communication tool again.”
Get me a child of three in here, I can’t make head or tail of this
As Sketch-a-Net is an application designed to simulate the way the human brain works, we are told:

The program has the potential of accurately identifying the subject of sketches 74.9 per cent of the time compared to humans that only managed a success rate of 73.1 per cent … the program [also] performed better at determining finer details in sketches. For instance, it was able to successfully distinguish the specific bird variants “seagull”, “flying-bird”, “standing-bird” and “pigeon” with 42.5 per cent accuracy compared to humans that only achieved 24.8 per cent.

Sadly, it looks like Sketch-a-Net can't be turned loose on things like ancient cave art or finger paintings. It owes much of its accuracy to knowing the order in which the lines of a sketch were drawn, which means it is best suited to interpreting drawings scrawled on touchscreens, electronic whiteboards and similar devices.

Hospedales and his colleagues envisage such a tool being used to search the internet or databases by sketch input (a boon for pornography searches, clearly), or perhaps to match mugshots or CCTV images to portraits drawn by police artists.

It also seems likely to be employed by bemused executives trying to figure out what on earth the people in marketing were attempting to convey in their last presentation, or by engineers trying to extract more information from hastily scrawled diagrams produced by design departments and architects.

The research will be presented at the 26th British Machine Vision Conference on Tuesday, 8 September 2015.
