
3D Abstract High Definition HD Wallpapers for Computer

Abstract 3d green wallpaper · 3d Digital Abstract Nature Wallpaper · Abstract orange frames wallpaper

3d Nature Tree Abstract Wallpaper · Abstract 3d Lake wallpaper · 3d Pink abstract star wallpaper

Abstract Forest water Wallpaper · 3d HD Abstract Desktop Background · Green Sphere Desktop Background wallpaper

Abstract Wallpapers · Abstract forest wallpaper · Abstract flower

Abstract Winter Wallpaper · Abstract Star Wallpaper · Abstract HD Wallpaper

Download a set of 100 high resolution wallpapers for your desktop

High resolution wallpapers for desktop
This is a set of high resolution wallpapers including the above wallpapers and more.
Click to download the wallpapers.

High Resolution Scary Haunted House Wallpapers for Desktop

To save an image, first click it to enlarge, then right-click on it and choose Save Image As..., or simply right-click on the thumbnail and choose Save Link As...
Note: all are high resolution images larger than 1024 x 768 px.




Danger Cemetery wallpaper · Haunted House Wallpaper · Halloween wallpaper

Scary Haunted House wallpaper · Haunted house blue scary · High Definition Scary Haunted house wallpaper

A Haunted Halloween · Scary haunted house HD wallpaper · Haunted house HD wallpapers for desktop

High Definition Scary Halloween Desktop Wallpaper

Horror Wallpaper · HD Halloween desktop wallpaper · Dark Halloween wallpaper
Happy Halloween Wallpaper · Scary jack-o'-lantern wallpaper · Halloween desktop wallpaper
HD Halloween wallpaper · High resolution Halloween wallpaper · Scary Dark Halloween wallpaper
Halloween Wallpaper · Halloween scary wallpaper · Happy Halloween desktop wallpaper
HD Scary wallpaper · Scary desktop wallpaper · Halloween house wallpaper

HDR+: Low Light and High Dynamic Range Photography in the Google Camera App



As anybody who has tried to use a smartphone to photograph a dimly lit scene knows, the resulting pictures are often blurry or full of random variations in brightness from pixel to pixel, known as image noise. Equally frustrating are smartphone photographs of scenes where there is a large range of brightness levels, such as a family photo backlit by a bright sky. In high dynamic range (HDR) situations like this, photographs will either come out with an overexposed sky (turning it white) or an underexposed family (turning them into silhouettes).

HDR+ is a feature in the Google Camera app for Nexus 5 and Nexus 6 that uses computational photography to help you take better pictures in these common situations. When you press the shutter button, HDR+ actually captures a rapid burst of pictures, then quickly combines them into one. This improves results in both low-light and high dynamic range situations. Below we delve into each case and describe how HDR+ works to produce a better picture.

Capturing low-light scenes

The camera on a smartphone has a small lens, meaning that it doesn't gather much light. If a scene is dimly lit, the resulting photograph will contain image noise. One solution is to lengthen the exposure time - how long the sensor chip collects light. This reduces noise, but since it's hard to hold a smartphone perfectly steady, long exposures have the unwanted side effect of blurring the shot. Devices with optical image stabilization (OIS) sense this "camera shake" and shift the lens rapidly to compensate. This allows longer exposures with less blur, but it can't help with really dark scenes.

HDR+ addresses this problem by taking a burst of shots with short exposure times, aligning them algorithmically, and replacing each pixel with the average color at that position across all the shots. Averaging multiple shots reduces noise, and using short exposures reduces blur. HDR+ also begins the alignment process by choosing the sharpest single shot from the burst, a technique astronomers call lucky imaging, used to reduce the blurring of images caused by Earth's shimmering atmosphere.
A low-light example, captured at dusk. The picture at left was taken with HDR+ off and the picture at right with HDR+ on. The HDR+ image is brighter, cleaner, and sharper, with much more detail visible in the subject's hair and eyelashes. Photos by Florian Kainz
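To make the merge step concrete, here is a minimal Python sketch of the align-and-average idea, assuming the burst frames have already been aligned and loaded as numpy arrays. The real HDR+ pipeline is far more sophisticated (tile-based alignment, robustness to moving objects), so everything here is illustrative only:

```
import numpy as np

def merge_burst(frames):
    """Average a burst of aligned frames to reduce noise.

    For independent noise, averaging N frames cuts the noise
    standard deviation by a factor of sqrt(N) while leaving the
    true signal unchanged - the core idea behind HDR+'s merge.
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

def pick_sharpest(frames):
    """Pick the 'lucky' reference frame: the one with the highest
    Laplacian variance, a common sharpness proxy (an assumption;
    the app's actual criterion is not documented here)."""
    def sharpness(img):
        gray = img.mean(axis=2) if img.ndim == 3 else img
        lap = (np.roll(gray, 1, 0) + np.roll(gray, -1, 0) +
               np.roll(gray, 1, 1) + np.roll(gray, -1, 1) - 4 * gray)
        return lap.var()
    return max(frames, key=sharpness)
```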
Capturing high dynamic range scenes

Another limitation of smartphone cameras is that their sensor chips have small pixels. This limits the camera's dynamic range, which refers to the span between the brightest highlight that doesn't blow out (turn white) and the darkest shadow that doesn't look black. One solution is to capture a sequence of pictures with different exposure times (sometimes called bracketing), then align and blend the images together. Unfortunately, bracketing causes parts of the long-exposure image to blow out and parts of the short-exposure image to be noisy. This makes alignment hard, leading to ghosts, double images, and other artifacts.

However, bracketing is not actually necessary; one can use the same exposure time in every shot. By using a short exposure, HDR+ avoids blowing out highlights, and by combining enough shots, it reduces noise in the shadows. This enables the software to boost the brightness of shadows, saving both the subject and the sky, as shown in the example below. And since all the shots look similar, alignment is robust; you won't see ghosts or double images in HDR+ images, as one sometimes sees with other HDR software.
A classic high dynamic range situation. With HDR+ off (left), the camera exposes for the subjects’ faces, causing the landscape and sky to blow out. With HDR+ on (right), the picture successfully captures the subjects, the landscape, and the sky. Photos by Ryan Geiss
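To illustrate why constant-exposure merging enables this shadow recovery, here is a tiny sketch that lifts shadows in a denoised merge with a simple global gamma curve; the production pipeline uses local tone mapping, so this stand-in only conveys the idea:

```
import numpy as np

def boost_shadows(merged, gamma=0.5):
    """Brighten shadows in a short-exposure burst merge.

    Every shot used the same short exposure, so highlights are
    preserved; averaging many shots suppresses shadow noise enough
    that a gamma curve (gamma < 1) can lift dark regions without
    amplifying grain. Expects pixel values normalized to [0, 1].
    """
    return np.clip(merged, 0.0, 1.0) ** gamma
```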
Our last example illustrates all three of the problems we've talked about - high dynamic range, low light, and camera shake. With HDR+ off, a photo of the Princeton University Chapel (shown below) taken with a Nexus 6 uses a relatively long 1/12-second exposure. Although optical image stabilization reduces camera shake, this is a long time to hold a camera still, so the image is slightly blurry. Since the scene was very dark, the walls are noisy despite the long exposure. Therefore, strong denoising is applied, causing smearing (below, left inset image). Finally, because the scene also has high dynamic range, the window at the end of the nave is blown out (below, right inset image), and the side arches are lost in darkness.
Click here to see the full resolution image. Photo by Marc Levoy
HDR+ mode performs better on all three problems, as seen in the image below: the chandelier at left is cleaner and sharper, the window is no longer blown out, and there is more detail in the side arches. And since a burst of shots is captured and the software begins alignment by choosing the sharpest shot in the burst (lucky imaging), the resulting picture is sharp.
Click here to see the full resolution image. Photo by Marc Levoy
Here's an album containing these comparisons and others as high-resolution images. For each scene in the album there is a pair of images captured by a Nexus 6; the first was taken with HDR+ off, and the second with HDR+ on.

Tips on using HDR+

Capturing a burst in HDR+ mode takes between 1/3 second and 1 second, depending on how dark the scene is. During this time you'll see a circle animating on the screen (left image below). Try to hold still until it finishes. The combining step also takes time, so if you scroll to the camera roll right after taking the shot, you'll see a thumbnail image and a progress bar (right image below). When the bar reaches 100%, your HDR+ picture is ready.
Should you leave HDR+ mode on? We do. The only times we turn it off are for fast-moving sports, because HDR+ pictures take longer to capture than a single shot, or for scenes so dark that we need the flash. But before you turn off HDR+ for these action shots or super-dark scenes, give it a try; we think you'll be surprised how well it works!

At this time HDR+ is available only on Nexus 5 and Nexus 6, as part of the Google Camera app.


High Quality Object Detection at Scale



Update - 26/02/2015
We recently discovered a bug in the evaluation methodology of our object detector. Consequently, the large numbers we initially reported below are not realistic: our separately trained context extractor was contaminated with half of the validation set images. Our initial results were therefore overly optimistic and not attainable by the methodology described in the paper. Re-evaluating, we have restricted ourselves to reporting only the single-model results on the other half of the dedicated validation set, without retraining the models. With the updated evaluation, we are still able to report the best single-model result on the ILSVRC 2014 detection challenge data set, with 0.43 mAP when combining both Selective Search and MultiBox proposals with our post-classification model. The original draft of our paper "Scalable, High Quality Object Detection" has been updated to reflect this information. We are deeply sorry if our initial reported results caused any confusion in the community. Original post follows below.
-C. Szegedy, S. Reed, D. Erhan, and D. Anguelov

The ILSVRC detection challenge is an influential academic benchmark for measuring the quality of object detection. This summer, the GoogLeNet team reported top results in the 2014 edition of the challenge, with ~2X improvement over the previous year’s best results. However, the quality of our results came at a high computational cost: processing each image took about two minutes on a state-of-the-art workstation.

Naturally, we began to think of how we could both improve the accuracy and reduce the computation time needed. Given the already high quality of previous results like those of GoogLeNet[6], we expected that further improvements to detection quality would be increasingly hard to achieve. In our recent paper Scalable, High Quality Object Detection[7], we detail advances that instead have resulted in an accelerated rate of progress in object detection:
Evolution of detection quality over time. On the y axis is the mean average precision of the best published results at any given time. The blue line shows results using individual models; the red line shows multi-model ensembles. Overfeat[8] was the state of the art at the end of last year, followed by R-CNN[1], published in May. The later measurement points are the results of our team.[6,7]
As seen in the plot above, the mean average precision has improved since August from 0.45 to 0.56, a 23% relative gain. The new approach can also match the quality of the former best solution while using 140X less computation.

Most current approaches for object detection employ two phases[1]: in the first phase, some hand-engineered algorithm proposes regions of interest in the image. In the second phase, each proposed region is run through a deep neural network, identifying which proposed patches correspond to an object (and what that object is).
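As a sketch of this two-phase structure, the skeleton below uses two hypothetical placeholder callables, propose_regions for phase one (e.g. Selective Search or MultiBox) and classify_patch for the phase-two network; neither corresponds to a real API:

```
def detect_objects(image, propose_regions, classify_patch, threshold=0.5):
    """Skeleton of the two-phase detection pipeline described above."""
    detections = []
    for box in propose_regions(image):        # phase 1: region proposals
        x0, y0, x1, y1 = box
        patch = image[y0:y1, x0:x1]           # crop the proposed region
        label, score = classify_patch(patch)  # phase 2: deep network scoring
        if score >= threshold:
            detections.append((box, label, score))
    return detections
```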

For the first phase, the common wisdom[1,2,3,4] was that it took skillfully crafted code to produce high quality region proposals. This comes with a drawback, though: these methods don't produce reliable scores for the proposed regions, which forces the second phase to evaluate most of the proposed patches in order to achieve good results.

So we revisited our prior "MultiBox" work[5], in which we let the computer learn to pick the proposals, to see whether we could avoid relying on any of the hand-crafted methods above. Although the MultiBox method, using previous-generation vision network architectures, could not compete with hand-engineered proposal approaches, relying fully on machine learning had several advantages. First, the quality of proposals increases with each improved network architecture or training methodology, without additional programming effort. Second, the regions come with confidence scores, which can be used to trade off running time versus quality, as sketched below. Additionally, the implementation is simplified.
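A minimal sketch of that tradeoff (a hypothetical helper, not the paper's actual procedure): rank the proposals by their confidence scores and send only a fixed budget of them to the expensive second phase, something that proposers without reliable scores cannot support.

```
def top_proposals(boxes, scores, budget):
    """Keep only the `budget` highest-confidence proposals, trading a
    little detection quality for a large reduction in phase-two work."""
    ranked = sorted(zip(scores, boxes), key=lambda p: p[0], reverse=True)
    return [box for _, box in ranked[:budget]]
```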

Once we used new variants of the network architecture introduced in [6], MultiBox also started to perform much better: now we could match the coverage of alternative methods with half as many proposal patches. We also changed our networks to take the context of objects into account, fueling additional quality gains for the second phase. Furthermore, we came up with a new way to train deep networks to learn more robustly even when some objects are not annotated in the training set, which improved both phases.

Besides the significant gains in mean average precision, we can now cut the number of evaluated patches dramatically at a modest loss of quality: the task that used to take two minutes per image on a workstation for the GoogLeNet ensemble (of 6 networks) is now performed in under a second using a single network, without GPUs. If we constrain ourselves to a single category like "dog", we can now process 50 images/second on the same machine with a more streamlined approach[7] that skips the proposal generation step altogether.

As a core area of research in computer vision, object detection is used for providing strong signals for photo and video search, while high quality detection could prove useful for self-driving cars and automatically generated image captions. We look forward to the continuing research in this field.

References:

[1]  Rich feature hierarchies for accurate object detection and semantic segmentation
by Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik (CVPR 2014)

[2]  Prime Object Proposals with Randomized Prim’s Algorithm
by Santiago Manen, Matthieu Guillaumin and Luc Van Gool (ICCV 2013)

[3]  Edge boxes: Locating object proposals from edges
by C. Lawrence Zitnick and Piotr Dollár (ECCV 2014)

[4]  BING: Binarized normed gradients for objectness estimation at 300fps
by Ming-Ming Cheng, Ziming Zhang, Wen-Yan Lin and Philip Torr (CVPR 2014)

[5]  Scalable Object Detection using Deep Neural Networks
by Dumitru Erhan, Christian Szegedy, Alexander Toshev, and Dragomir Anguelov (CVPR 2014)

[6]  Going deeper with convolutions
by Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke and Andrew Rabinovich

[7]  Scalable, high quality object detection
by Christian Szegedy, Scott Reed, Dumitru Erhan and Dragomir Anguelov

[8]  OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks
by Pierre Sermanet, David Eigen, Xiang Zhang, Michael Mathieu, Rob Fergus and Yann LeCun (ICLR 2014)


* PhD student at the University of Michigan, Ann Arbor, and Software Engineering Intern at Google.

Scary high resolution desktop Wallpapers

To save an image, first click it to enlarge, then right-click on it and choose Save Image As..., or simply right-click on the thumbnail and choose Save Link As...
Note: all are high resolution images larger than 1024 x 768 px.




Haunted house with flying bats · Haunted house with curved moon · Haunted house with scary moon

Haunted house inside scary jungle · Haunted house with broken gate · Haunted house at night

Haunted house · Scary haunted house · Scary haunted house