
How a Victorian mathematics don became a digital pioneer

"With a steely glare, a starched collar and a pair of truly prodigious sideburns, he is the digital pioneer you have almost certainly never heard of. Now, 200 years after his birth, George Boole is finally to get the acclaim he deserves." So begins a recent article in The Guardian. Well of course, if you're a computer scientist, you almost certainly have heard of George Boole; he was, after all, the creator of Boolean logic. However, that's probably all you know, so there will be much about him that you're not aware of. As with Charles Babbage, Ada Lovelace and of course Alan Turing, it's good to see these formerly obscure digital pioneers getting the public recognition they so deserve. Perhaps, as our society becomes more and more dependent on computers, we're starting to realise that these people are every bit as important as Galileo, Newton, Darwin, Faraday...

from The Universal Machine http://universal-machine.blogspot.com/


Computer junk worth thousands!

This is a great story reported in the LA Times: a woman whose husband had recently died decided to clean out their garage. She found several boxes of computer junk — old circuit boards, keyboards, you know the sort of stuff. Doing the right thing, she took it all to a local e-waste recycling centre in Silicon Valley. Some time later, when the recyclers were sorting through the boxes, they found an Apple I computer. They auctioned this very rare machine for $200,000 and are now trying to locate the woman to give her half the money. So if you have a father, grandfather or uncle with what looks like computer junk, perhaps you should have a careful sort through it; you never know what you may find.

from The Universal Machine http://universal-machine.blogspot.com/


A picture is worth a thousand coherent words: building a natural description of images



“Two pizzas sitting on top of a stove top oven”
“A group of people shopping at an outdoor market”
“Best seats in the house”

People can summarize a complex scene in a few words without thinking twice. It’s much more difficult for computers. But we’ve just gotten a bit closer -- we’ve developed a machine-learning system that can automatically produce captions (like the three above) to accurately describe images the first time it sees them. This kind of system could eventually help visually impaired people understand pictures, provide alternate text for images in parts of the world where mobile connections are slow, and make it easier for everyone to search on Google for images.

Recent research has greatly improved object detection, classification, and labeling. But accurately describing a complex scene requires a deeper representation of what’s going on in the scene, capturing how the various objects relate to one another and translating it all into natural-sounding language.
Automatically captioned: “Two pizzas sitting on top of a stove top oven”
Many efforts to construct computer-generated natural descriptions of images propose combining current state-of-the-art techniques in both computer vision and natural language processing to form a complete image description approach. But what if we instead merged recent computer vision and language models into a single jointly trained system, taking an image and directly producing a human-readable sequence of words to describe it?

This idea comes from recent advances in machine translation between languages, where a Recurrent Neural Network (RNN) transforms, say, a French sentence into a vector representation, and a second RNN uses that vector representation to generate a target sentence in German.

Now, what if we replaced that first RNN and its input words with a deep Convolutional Neural Network (CNN) trained to classify objects in images? Normally, the CNN’s last layer is used in a final Softmax among known classes of objects, assigning a probability that each object might be in the image. But if we remove that final layer, we can instead feed the CNN’s rich encoding of the image into an RNN designed to produce phrases. We can then train the whole system directly on images and their captions, so that it maximizes the likelihood that the descriptions it produces best match the training descriptions for each image.
The model combines a vision CNN with a language-generating RNN so it can take in an image and generate a fitting natural-language caption.
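As a minimal sketch of this encoder-decoder wiring — not the paper's actual architecture or trained weights — a toy vanilla RNN in NumPy shows the idea: the image feature vector stands in for the CNN's penultimate-layer encoding, and greedy decoding emits one word at a time. The vocabulary, dimensions and all parameter names here are invented for illustration, and the weights are random, so the output caption is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary; a real system uses thousands of words plus <start>/<end> tokens.
vocab = ["<start>", "<end>", "two", "pizzas", "on", "an", "oven"]
V, D, H = len(vocab), 16, 32  # vocab size, embedding dim, hidden dim

# Stand-in for the CNN: in the real system this vector would be the
# CNN's last-layer activations for the input image.
cnn_features = rng.standard_normal(D)

# Randomly initialised parameters (joint training would fit these).
W_embed = rng.standard_normal((V, D)) * 0.1   # word embeddings
W_xh = rng.standard_normal((D, H)) * 0.1      # input -> hidden
W_hh = rng.standard_normal((H, H)) * 0.1      # hidden -> hidden
W_hy = rng.standard_normal((H, V)) * 0.1      # hidden -> word logits

def rnn_step(x, h):
    """One vanilla-RNN step: new hidden state from input x and old state h."""
    return np.tanh(x @ W_xh + h @ W_hh)

def greedy_caption(features, max_len=10):
    # The image encoding is fed in first, in place of a source sentence.
    h = rnn_step(features, np.zeros(H))
    word = vocab.index("<start>")
    caption = []
    for _ in range(max_len):
        h = rnn_step(W_embed[word], h)
        word = int(np.argmax(h @ W_hy))  # greedy: pick the most likely next word
        if vocab[word] == "<end>":
            break
        caption.append(vocab[word])
    return caption

print(greedy_caption(cnn_features))  # untrained weights, so the words are arbitrary
```

In the trained system, the loss compares the decoder's word probabilities against the human-written caption at each step, and gradients flow back through the RNN into the CNN, which is what makes the model "jointly trained".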
Our experiments with this system on several openly published datasets, including Pascal, Flickr8k, Flickr30k and SBU, show how robust the qualitative results are -- the generated sentences are quite reasonable. It also performs well in quantitative evaluations with the Bilingual Evaluation Understudy (BLEU), a metric used in machine translation to evaluate the quality of generated sentences.
A selection of evaluation results, grouped by human rating.
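Full BLEU combines clipped n-gram precisions over several values of n with a brevity penalty; as a minimal sketch of the core idea, the modified n-gram precision can be computed in a few lines of plain Python. The example sentences below are invented for illustration (the reference echoes the caption quoted earlier).

```python
from collections import Counter

def modified_precision(candidate, reference, n=1):
    """Modified n-gram precision, the heart of BLEU: each candidate n-gram
    is only credited up to the number of times it occurs in the reference."""
    def ngrams(tokens):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    cand, ref = ngrams(candidate), ngrams(reference)
    overlap = sum(min(count, ref[g]) for g, count in cand.items())
    return overlap / max(sum(cand.values()), 1)

generated = "two pizzas sitting on a stove".split()
reference = "two pizzas sitting on top of a stove top oven".split()
print(modified_precision(generated, reference, n=1))  # 1.0: every word occurs in the reference
print(modified_precision(generated, reference, n=2))  # 0.8: 4 of 5 bigrams match
```

The clipping ("min" against the reference count) stops a degenerate caption from scoring well by repeating one matching word, which is why BLEU uses modified rather than plain precision.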
A picture may be worth a thousand words, but sometimes it’s the words that are most useful -- so it’s important we figure out ways to translate from images to words automatically and accurately. As the datasets suited to learning image descriptions grow and mature, so will the performance of end-to-end approaches like this. We look forward to continuing developments in systems that can read images and generate good natural-language descriptions. To get more details about the framework used to generate descriptions from images, as well as the model evaluation, read the full paper here.