
From the Mouse to the Smartphone and Beyond

It's that time of the year again: as the nights draw in, the free public Gibbons Lectures take place in Auckland. The first lecture is this Thursday the 30th at 6:00pm for a 6:30pm start. Every year the lecture series has a theme, and this year it's human-computer interaction. The first lecture is by Professor Mark Apperley and titled From the Mouse to the Smartphone and Beyond: tracing the development of human-computer interaction. Click the lecture link for full venue details; if you can't attend, the lecture will be streamed live and available after the event.

from The Universal Machine http://universal-machine.blogspot.com/


Beyond Short Snippets: Deep Networks for Video Classification



Convolutional Neural Networks (CNNs) have recently shown rapid progress in advancing the state of the art of detecting and classifying objects in static images, automatically learning complex features in pictures without the need for manually annotated features. But what if one wanted not only to identify objects in static images, but also analyze what a video is about? After all, a video isn’t much more than a string of static images linked together in time.

As it turns out, video analysis provides even more information for the object detection and recognition task performed by CNNs, adding a temporal component through which motion and other information can also be used to improve classification. However, analyzing entire videos is challenging from a modeling perspective because one must model variable-length videos with a fixed number of parameters. Not to mention that modeling variable-length videos is computationally very intensive.

In Beyond Short Snippets: Deep Networks for Video Classification, to be presented at the 2015 Computer Vision and Pattern Recognition conference (CVPR 2015), we[1] evaluated two approaches - feature pooling networks and recurrent neural networks (RNNs) - capable of modeling variable-length videos with a fixed number of parameters while maintaining a low computational footprint. In doing so, we were able to show not only that learning a high-level global description of the video's temporal evolution is very important for accurate video classification, but also that our best networks exhibited significant performance improvements over previously published results on the Sports 1 million dataset (Sports-1M).

In previous work, we employed 3D convolutions (meaning convolutions over time and space) over short video clips - typically just a few seconds - to implicitly learn motion features from raw frames and then aggregate predictions at the video level. For video classification, these low-level motion features only marginally outperformed models in which no motion was modeled.

To understand why, consider the following two images which are very similar visually but obtain drastically different scores from a CNN model trained on static images:
Slight differences in object poses/context can change the predicted class/confidence of CNNs trained on static images.
Since each individual video frame forms only a small part of the video's story, static frames and short video snippets (2-3 secs) use incomplete information and can easily confuse subtle, fine-grained distinctions between classes (e.g., Tae Kwon Do vs. Systema) or use portions of the video irrelevant to the action of interest.

To get around this frame-by-frame confusion, we used feature pooling networks that independently process each frame and then pool/aggregate the frame-level features over the entire video at various stages. Another approach we took was to utilize an RNN (derived from Long Short-Term Memory units) instead of feature pooling, allowing the network itself to decide which parts of the video are important for classification. By sharing parameters through time, both the feature pooling and RNN architectures are able to maintain a constant number of parameters while capturing a global description of the video's temporal evolution.
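To make the pooling idea concrete, here is a minimal sketch in plain NumPy - the shapes, random weights and classifier are placeholders, not the networks from the paper - of max-pooling per-frame CNN features over time so that a video of any length yields a fixed-size descriptor for classification:

```python
import numpy as np

# Hypothetical frame-level CNN features: one D-dimensional descriptor per sampled frame.
# Shapes and weights are illustrative only; Sports-1M has 487 sport classes.
num_frames, feat_dim, num_classes = 120, 2048, 487

frame_features = np.random.randn(num_frames, feat_dim)   # (T, D); T varies per video
W = np.random.randn(feat_dim, num_classes) * 0.01        # toy classifier weights
b = np.zeros(num_classes)

# Feature pooling: collapse the variable-length time axis with an element-wise max,
# so the classifier always sees a fixed-size (D,) vector regardless of video length.
pooled = frame_features.max(axis=0)

logits = pooled @ W + b
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print("predicted class:", probs.argmax())
```

An LSTM-based aggregator would replace the max over time with a recurrent pass over the frame features, letting the network decide how to weight each frame; either way the parameter count stays independent of video length.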

In order to feed the two aggregation approaches, we compute an image “pixel-based” CNN model, based on the raw pixels in the frames of a video. We processed videos for the “pixel-based” CNNs at one frame per second to reduce computational complexity. Of course, at this frame rate implicit motion information is lost.
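As a rough illustration of that subsampling step (not the paper's actual pipeline), one frame per second can be pulled from a video with OpenCV; the file name here is a placeholder:

```python
import cv2

# Sample roughly one frame per second from a (hypothetical) local video
# to feed the pixel-based CNN.
cap = cv2.VideoCapture("video.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # fall back if the container reports no rate

frames, index = [], 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if index % int(round(fps)) == 0:      # keep one frame per second of video
        frames.append(frame)
    index += 1
cap.release()
print(len(frames), "frames sampled")
```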

To compensate, we incorporate explicit motion information in the form of optical flow - the apparent motion of objects across a camera's viewfinder due to the motion of the objects or the motion of the camera. We compute optical flow images over adjacent frames to learn an additional “optical flow” CNN model.
Left: Image used for the pixel-based CNN; Right: Dense optical flow image used for optical flow CNN
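For illustration, dense optical flow between adjacent frames can be computed with OpenCV's Farneback method as sketched below; the file name is a placeholder, and the paper does not commit to this particular flow algorithm or these parameters:

```python
import cv2

# Compute dense optical flow between adjacent frames of a (hypothetical) local video.
cap = cv2.VideoCapture("video.mp4")
ok, prev = cap.read()
if not ok:
    raise SystemExit("could not open video")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

flows = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Farneback dense flow: a per-pixel (dx, dy) displacement field between two frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    flows.append(flow)
    prev_gray = gray
cap.release()
```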
The pixel-based and optical flow based CNN model outputs are provided as inputs to both the RNN and pooling approaches described earlier. These two approaches then separately aggregate the frame-level predictions from each CNN model input, and average the results. This allows our video-level prediction to take advantage of both image information and motion information to accurately label videos of similar activities even when the visual content of those videos varies greatly.
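A minimal sketch of that late-fusion step, with made-up per-video class probabilities standing in for the real aggregation-network outputs:

```python
import numpy as np

# Hypothetical per-video class probabilities from the two streams; in practice these
# come from the pooling or LSTM aggregators, not random numbers.
num_classes = 487
pixel_probs = np.random.dirichlet(np.ones(num_classes))   # pixel-based CNN stream
flow_probs = np.random.dirichlet(np.ones(num_classes))    # optical-flow CNN stream

# Late fusion: average the two streams' predictions and take the top class.
fused = (pixel_probs + flow_probs) / 2.0
print("predicted class:", fused.argmax())
```

Averaging the two streams is a simple way to let appearance information and motion information back each other up, as described above.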
Badminton (top 25 videos according to the max-pooling model). Our methods accurately label all 25 videos as badminton despite the variety of scenes in the various videos because they use the entire video’s context for prediction.
We conclude by observing that although very different in concept, the max-pooling and the recurrent neural network methods perform similarly when using both images and optical flow. Currently, these two architectures are the top performers on the Sports-1M dataset. The main difference between the two was that the RNN approach was more robust when using optical flow alone on this dataset. Check out a short video showing some example outputs from the deep convolutional networks presented in our paper.


[1] Research carried out in collaboration with University of Maryland, College Park PhD student Joe Yue-Hei Ng and University of Texas at Austin PhD student Matthew Hausknecht, as part of a Google Software Engineering Internship.


The Internet Is Failing The Website Preservation Test

I blogged a few months back about the increasing problem of dead links (known as link rot) on research websites. Not surprisingly, this problem is not isolated to academic websites. An article by Rob Miller in TechCrunch, called "The Internet Is Failing The Website Preservation Test," makes the case for a digital Library of Congress to preserve and protect all of the content on the internet to counter this growing problem. Preserving the integrity of the web for posterity cannot, he (and others) argue, be left to content publishers or to non-profits like the Internet Archive's Wayback Machine or the British Library's UK Web Archive. The web is such a vital information source to us all now that it must be preserved for future historians.

from The Universal Machine http://universal-machine.blogspot.com/


Remembering to forget

With the cost of data storage so cheap, the Internet has the potential to remember everything that is ever posted online, forever! Whilst historians might relish the idea of a future where everyone's personal details are searchable centuries back, most people like the idea of being able to be forgotten. The European Union is leading the way with legislation that provides a "right to be forgotten" - strictly speaking, a right to have certain kinds of information removed from search engine results. However, in a Kafkaesque twist, my colleague Mark Wilson brought to my attention the fact that Google in the EU has been ordered to remove links to stories about Google removing links to stories. You can read more about this in this Ars Technica story.

from The Universal Machine http://universal-machine.blogspot.com/


Beyond Touch: using everyday tools as input devices

This Thursday's Gibbons Lecture, titled Beyond Touch: using everyday tools as input devices, is by Dr Beryl Plimmer of the Department of Computer Science at The University of Auckland. The lecture is this Thursday the 7th at 6:00pm for a 6:30pm start. Click the lecture link for full venue details; if you can't attend, the lecture will be streamed live and available after the event.

from The Universal Machine http://universal-machine.blogspot.com/
