
You can change your Windows password if you forget the current password

1st step: Go to Run, type lusrmgr.msc, and press Enter.

2nd step: A new window opens; click the Users folder.

3rd step: There you can see your Windows username.

4th step: Follow the image instructions: right-click your username, choose "Set Password...", and enter a new password.
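If you prefer the command line, here is a minimal sketch of the same idea (my own addition, not part of the original tutorial). It assumes you run it from an elevated (administrator) prompt, and the account name and password below are placeholders.

```python
# A minimal sketch (not part of the original tutorial): resetting a local
# account password with the built-in "net user" command, wrapped in Python.
# Run from an elevated (administrator) prompt; the values below are placeholders.
import subprocess

def set_local_password(username: str, new_password: str) -> None:
    """Set a new password for a local Windows account via `net user`."""
    subprocess.run(["net", "user", username, new_password], check=True)

if __name__ == "__main__":
    set_local_password("YourUserName", "NewP@ssw0rd")  # hypothetical values
```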


Make Your Windows 7 Genuine by Command Prompt: Computer Tips

How to make your Windows 7 genuine using the Command Prompt. Follow my image instructions below. Let's go!

# Step 1: Go to the Start menu, type cmd, and right-click the cmd icon.

Windows 7 Genuine1

# Step 2: Click "Run as administrator", then confirm with OK.
 Windows 7 Genuine2

# Step 3: Type the command SLMGR -REARM, press Enter, and click OK on the dialog that appears.

 Windows 7 Genuine3

# Step 4: Wait a few seconds.
 Windows 7 Genuine4

# Step 5: Once the task completes successfully, restart your computer and Windows should show as genuine.
 Windows 7 Genuine5


How To Prevent Access To Drives From Your Computer (video tutorial only)


Hello everyone! This is a really useful tutorial; after watching the video, you can easily restrict access to the drives on your hard disk.

Build your own satellite

Honestly! It seems as if there is increasingly no activity that dedicated amateurs aren't willing to have a go at. Formerly, building satellites was done by NASA and specialists like the Jet Propulsion Laboratory, but thanks to a greatly simplified design, called CubeSat, radio amateurs are building and launching their own communications satellites into low Earth orbit. They even have their own governing organisation, called AMSAT. So if you have a project that needs its own communications satellite, why not build your own?



from The Universal Machine http://universal-machine.blogspot.com/



Controlling music with your mind

Last year I bought an EEG headset (the Mindwave Mobile) to play with my Raspberry Pi and then ended up putting it down for a while. Luckily, this semester I started doing some more machine learning and decided to try it back out. I thought it might be possible to have it recognize when you dislike music and then switch the song on Pandora for you. This would be great for when you are working on something or moving around away from your computer.

So, using the EEG headset, a Raspberry Pi, and a Bluetooth module, I set to work on recording some data. I listened to a couple of songs I liked and then a couple of songs I didn't like, labeling the recorded data accordingly. The Mindwave gives you the delta, theta, high alpha, low alpha, high beta, low beta, high gamma, and mid gamma brainwave bands. It also approximates your attention and meditation levels using the FFT (Fast Fourier Transform) and gives you a skin-contact signal level (with 0 being the best and 200 being the worst).
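As a concrete picture of the data, here is a small sketch of the 10-dimensional feature vector used below (the eight band powers plus attention and meditation). The field names are my own labels for illustration, not the names used by the headset parser.

```python
# A sketch of one data point from the headset: 8 band powers plus the
# attention and meditation estimates give a 10-dimensional feature vector.
# Field names are illustrative labels, not the parser's actual API.
from dataclasses import dataclass, astuple

@dataclass
class EegSample:
    delta: float
    theta: float
    low_alpha: float
    high_alpha: float
    low_beta: float
    high_beta: float
    high_gamma: float
    mid_gamma: float
    attention: float
    meditation: float

    def as_vector(self) -> list:
        """Return the sample as a flat 10-element feature vector."""
        return list(astuple(self))
```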

Since I know very little about brainwaves, I can't make an educated decision on which changes to look at to detect this; that's where machine learning comes in. I can use Bayesian estimation to construct two multivariate Gaussian models, one that represents good music and one that represents bad music.



----TECHNICAL DETAILS BELOW----
We construct the model using the parameters below (where μ is the mean of the data and σ is the standard deviation of the data):

[Equation images: the estimates of μ and σ and the resulting multivariate Gaussian density for each class.]
Now that we have the model above for both good music and bad music, we can use a decision boundary to detect what kind of music you are listening to at each data point.

[Equation images: the decision rule comparing the two class densities, followed by the definitions of its terms.]
The boundary will be some sort of quadratic surface (hyper-ellipsoid, hyper-paraboloid, etc.), and it might look something like the figure below (though ours is a 10-dimensional function):

[Figure: an example quadratic decision boundary separating the two classes.]

----END TECHNICAL DETAILS----
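For the curious, here is a rough sketch of that estimation and decision rule in Python (my own illustration, not the code from the repository). It assumes the labeled samples are stacked into NumPy arrays with one 10-dimensional row per data point and uses a full covariance matrix; per-feature standard deviations (a diagonal covariance) would also fit the description above.

```python
# A sketch of the Bayesian-estimation classifier described above (illustrative,
# not the repository code). good_data and bad_data are arrays of shape
# (n_samples, 10), one row per headset reading.
import numpy as np
from scipy.stats import multivariate_normal

def fit_gaussian(samples: np.ndarray):
    """Maximum-likelihood mean and covariance for one class of music."""
    mu = samples.mean(axis=0)
    sigma = np.cov(samples, rowvar=False)
    return mu, sigma

def is_bad_music(x: np.ndarray, good_model, bad_model) -> bool:
    """Quadratic decision rule: pick the class with the higher log-density."""
    mu_g, sig_g = good_model
    mu_b, sig_b = bad_model
    return (multivariate_normal.logpdf(x, mu_b, sig_b) >
            multivariate_normal.logpdf(x, mu_g, sig_g))  # equal priors assumed

# Usage with hypothetical training arrays:
# good_model = fit_gaussian(good_data)
# bad_model = fit_gaussian(bad_data)
# bad = is_bad_music(new_sample, good_model, bad_model)
```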

The result is an algorithm that is accurate about 70% of the time, which isn't reliable enough on its own. However, since we have temporal data, we can exploit that: we wait until we get 4 bad-music estimations in a row, and then we skip the song.
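That smoothing step is simple enough to sketch directly (again my own illustration, not the repository code):

```python
# Temporal smoothing as described above: only skip once four consecutive
# samples have been classified as bad music.
def should_skip(decisions, needed: int = 4) -> bool:
    """decisions is an iterable of per-sample booleans (True = bad music)."""
    streak = 0
    for bad in decisions:
        streak = streak + 1 if bad else 0
        if streak >= needed:
            return True
    return False

assert should_skip([False, True, True, True, True])
assert not should_skip([True, True, False, True, True])
```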

I've created a short video (don't worry, I skip around so you don't have to watch me listen to music forever) as a proof of concept. The end result is a way to control what song is playing with only your brainwaves.


This is an extremely experimental system; it only works because there are just two classes to choose from, and the accuracy is not even close to good. I just thought it was cool. I'm curious to see whether a model trained on my brainwaves will work for other people as well, but I haven't tested that yet. There is a lot still to refine, but it's nice to have a proof of concept. You can't buy one of these off the shelf and expect it to change your life: it's uncomfortable and not as accurate as an expensive EEG, but it is fun to play with. Now I need to attach one to Google Glass.

NOTE: This was done as a toybox example, for fun. You probably aren't going to see EEG-controlled headphones in the next couple of years. Eventually, maybe, but not due to work like this.

How to get it working


HERE IS THE SOURCE CODE

I use pianobar to stream Pandora, along with a modified version of the control-pianobar.sh control script, which I have put in the GitHub repository below.
The code is on GitHub here, but first make sure you have Python >= 3.0, bluez, pybluez, and pianobar installed. You will also need to change the home directory information, copy the control-pianobar.sh script to /usr/bin, change the MAC address (mindwaveMobileAddress) in mindwavemobile/MindwaveMobileRawReader.py to the MAC address of your Mindwave Mobile device (the Python parser code comes from here), and run sudo python setup.py install.

I start pianobar with control-pianobar.sh p and then start the EEG program with python control_music.py. It will tell you in real time what it thinks of the current song and will skip it if it detects 4 bad classifications in a row. It will also tell you, via a low-signal warning, whether the headset is seated well enough.

Thanks to Dr. Aaron Bobick (whose pictures and equations I used), robintibor (whose python code I used), and Daniel Castro (who showed me his code for Bayesian Estimation in python since my implementation was in Matlab).


Consider donating to further my tinkering.


Places you can find me

Using your RPi2 for Valentine's Day

I thought I would share something cool I did with my Raspberry Pi that others might like for Valentine's Day.

I basically had a lot of devices sitting around that I realized I could combine for a good Valentine's Day surprise for my girlfriend.

First off, I had my robotic bartender (bottender), which you can see on my Hackaday projects page.
I modified it so that it would pour wine on command.

Next I had a set of WeMo light switches that you can get here:

Belkin WeMo Light Switch, Wi-Fi Enabled

These are really nicely made. They are easy to install, Wi-Fi enabled, and easy to interface with through a custom API.
I found a nice API for the WeMo light switches here.
In the end, though, I created a simple shell-script API that uses curl. You can see mine on GitHub here.
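For a flavour of what such a script does, here is a rough Python equivalent (a sketch under assumptions, not the author's script). WeMo switches are commonly controlled through a UPnP SOAP action called SetBinaryState; the endpoint path and port below are the commonly documented defaults and may differ by firmware, and the IP address is a placeholder.

```python
# A rough sketch of toggling a WeMo switch, equivalent in spirit to the
# curl-based shell script mentioned above. The SOAP action, endpoint path,
# and port are the commonly documented WeMo defaults and may vary by firmware.
import requests

SOAP_BODY = """<?xml version="1.0" encoding="utf-8"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
            s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <s:Body>
    <u:SetBinaryState xmlns:u="urn:Belkin:service:basicevent:1">
      <BinaryState>{state}</BinaryState>
    </u:SetBinaryState>
  </s:Body>
</s:Envelope>"""

def set_wemo_state(ip: str, on: bool, port: int = 49153) -> None:
    """Turn a WeMo switch on or off via its UPnP SetBinaryState action."""
    headers = {
        "Content-Type": 'text/xml; charset="utf-8"',
        "SOAPACTION": '"urn:Belkin:service:basicevent:1#SetBinaryState"',
    }
    requests.post(f"http://{ip}:{port}/upnp/control/basicevent1",
                  data=SOAP_BODY.format(state=1 if on else 0),
                  headers=headers, timeout=5)

if __name__ == "__main__":
    set_wemo_state("192.168.1.50", on=True)  # placeholder switch IP
```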

I set up my WeMo light switch to control my bedroom fan. Then I sprinkled the top with rose petals.
Connecting this all together, I have a button on my phone that turns the fan on (sprinkling rose petals down) and turns bottender on (pouring two glasses of wine), resulting in this:


Happy Valentine's Day, everyone!



Consider donating to further my tinkering since I do all this and help people out for free.


Places you can find me

Download a set of 100 high resolution wallpapers for your desktop

High resolution wallpapers for desktop
This is a set of high-resolution wallpapers, including the ones above and more.
Click to download the wallpapers.

Robot Servants Are Going to Make Your Life Easy

...Then They'll Ruin It.

Well, that's the opinion of Evan Selinger in a recent article for Wired. JIBO, the "world's first family robot", is heralding an age where robots or digital personal assistants anticipate our needs and perhaps even start making decisions for us. That's where Selinger believes the danger lies. Watch the JIBO promotional video below and make up your own mind.


from The Universal Machine http://universal-machine.blogspot.com/



Projecting without a projector: sharing your smartphone content onto an arbitrary display



Previously, we presented Deep Shot, a system that allows a user to “capture” an application (such as Google Maps) running on a remote computer monitor via a smartphone camera and bring the application on the go. Today, we’d like to discuss how we support the opposite process, i.e., transferring mobile content to a remote display, again using the smartphone camera.

Although the computing power of today’s mobile devices grows at an accelerated rate, the form factor of these devices remains small, which constrains both the input and output bandwidth for mobile interaction. To address this issue, we investigated how to enable users to leverage nearby IO resources to operate their mobile devices. As part of the effort, we developed Open Project, an end-to-end framework that allows a user to “project” a native mobile application onto an arbitrary display using a smartphone camera, leveraging interaction spaces and input modality of the display. The display can range from a PC or laptop monitor, to a home Internet TV and to a public wall-sized display. Via an intuitive, projection-based metaphor, a user can easily share a mobile application by projecting it onto a target display.

Open Project is an open, scalable, web-based framework for enabling mobile sharing and collaboration. It can turn any computer display projectable instantaneously and without deployment. Developers can add support for Open Project in native mobile apps by simply linking a library, requiring no additional hardware or sensors. Our user participants responded highly positively to Open Project-enabled applications for mobile sharing and collaboration.



Will a robot take your job?

With robotic technology advancing at such a pace, there is the obvious prospect that robots will start to perform a wider range of services and tasks, rather than being limited to factory manufacturing as they mostly are at the moment. A recent article in Wired called "Robots Will Steal Our Jobs, But They Will Give Us New Ones" argues that although many current jobs will be taken by robots, new opportunities will arise, the most obvious of which is servicing and maintaining all the robots. Incidentally, the photo here is of a robot that can cook a hamburger, and the Japanese have robots that can prepare ramen noodles.

from The Universal Machine http://universal-machine.blogspot.com/



Classifying everything using your RPi Camera: Deep Learning with the Pi

For those who don't want to read, the code can be found on my GitHub with a README:
https://github.com/StevenHickson/RPi_CaffeQuery
You can also read about it on my Hackaday.io page here.

What is object classification?

Object classification has been a very popular topic over the past couple of years. Given an image, we want a computer to be able to tell us what that image is showing. The newest trend has been using convolutional neural networks, trained with large amounts of data, to do the classification.

One of the bigger frameworks for this is the Caffe framework. For more on this, see the Caffe home page.
You can test out their web demo here. It isn't great at people, but it is very good at cats, dogs, objects, and activities.


Why is this useful?

There are all kinds of autonomous tasks you can do with the RPi camera. Perhaps you want to know if your dog is in your living room, so the Pi can take his/her picture or tell him/her they are a good dog. Perhaps you want your RPi to recognize whether there is fruit in your fruit drawer so it can order you more when it is empty. The possibilities are endless.

How do convolutional neural networks work (a VERY simple overview)?

Convolutional neural networks are based loosely on how the human brain works. They are built of layers of many neurons that are "activated" by certain inputs. The input layer is connected through a series of interconnected neurons in hidden layers, like so:
[1]

Each neuron sends its signal to every neuron it is connected to; the signal is multiplied by the connection weight and run through a sigmoid function. The network is trained by changing the weights, using backpropagation, in order to minimize an error function over a set of inputs with known outputs.
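As a toy illustration of that forward pass (not the network Caffe actually uses), here is the weight-multiply-and-sigmoid step for a tiny fully connected network:

```python
# Toy forward pass: each layer multiplies its input by the connection weights
# and squashes the result with a sigmoid (illustrative only; Caffe's real
# networks are convolutional and much deeper).
import numpy as np

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

def forward(x: np.ndarray, weights: list) -> np.ndarray:
    """Propagate input x through a list of layer weight matrices."""
    activation = x
    for W in weights:
        activation = sigmoid(W @ activation)
    return activation

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    layers = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]  # 3 -> 4 -> 2
    print(forward(rng.standard_normal(3), layers))
```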

How do we get this on the Pi?

Well, I went ahead and compiled Caffe on the RPi. Unfortunately, since there is no code to optimize the network on the Pi's GPU, classification takes ~20-25 s per image, which is far too slow.
Note: I did find a different optimized CNN for the RPi by Pete Warden here. It looks great, but it still takes about 3 seconds per image, which still doesn't seem fast enough.

You will also need the Raspberry Pi camera which you can get from here:
Raspberry PI 5MP Camera Board Module

A better option: Using the web demo with Python

So we can take advantage of the Caffe web demo and use that to reduce the processing time even further. With this method, the image classification takes ~1.5s, which is usable for a system.

How does the code work?

We make a symbolic link from /dev/shm/images/ to /var/www for Apache and forward router port 5050 to port 80 on the Pi.
Then we use raspistill to take an image and save it to memory as /dev/shm/images/test.jpg. Since this is symlinked in /var/www, we should be able to see it at http://YOUR-EXTERNAL-IP:5050/images/test.jpg.
Then we pull up the Caffe demo framework with our image and grab the classification results. This is done by queryCNN.py.
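Here is a minimal sketch of that capture-and-query loop (my own illustration; the author's actual logic is in queryCNN.py in the repository). The raspistill flags are standard, but the demo URL and its classify_url endpoint are assumptions based on the stock Caffe web demo, and YOUR-EXTERNAL-IP is a placeholder.

```python
# Sketch of the capture-and-query loop described above (illustrative; see
# queryCNN.py in the repository for the real implementation). The web-demo
# URL and endpoint are assumptions based on the stock Caffe web demo.
import subprocess
import requests

IMAGE_PATH = "/dev/shm/images/test.jpg"                         # symlinked under /var/www
PUBLIC_URL = "http://YOUR-EXTERNAL-IP:5050/images/test.jpg"     # reachable from outside
DEMO_URL = "http://demo.caffe.berkeleyvision.org/classify_url"  # assumed endpoint

def capture_image() -> None:
    """Grab a frame from the Pi camera straight into RAM-backed storage."""
    subprocess.run(["raspistill", "-o", IMAGE_PATH, "-w", "640", "-h", "480", "-t", "1"],
                   check=True)

def classify() -> str:
    """Ask the Caffe web demo to classify the publicly reachable image."""
    response = requests.get(DEMO_URL, params={"imageurl": PUBLIC_URL}, timeout=30)
    return response.text  # the demo returns a page listing its top predictions

if __name__ == "__main__":
    capture_image()
    print(classify())
```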

What does the output look like?

Given a picture of some of my Pi components, I get this, which is pretty accurate:

Where can I get the code?

https://github.com/StevenHickson/RPi_CaffeQuery

[1] http://white.stanford.edu/teach/index.php/An_Introduction_to_Convolutional_Neural_Networks

Consider donating to further my tinkering since I do all this and help people out for free.



Places you can find me

Play With Spider: The Spider Moves Towards Your Mouse



How to measure translation quality in your user interfaces



Worldwide, there are about 200 languages that are spoken by at least 3 million people. In this global context, software developers are required to translate their user interfaces into many languages. While graphical user interfaces have evolved substantially when compared to text-based user interfaces, they still rely heavily on textual information. The perceived language quality of translated user interfaces (UIs) can have a significant impact on the overall quality and usability of a product. But how can software developers and product managers learn more about the quality of a translation when they don’t speak the language themselves?

Key information in interaction elements and content is mostly conveyed through text. This aspect can be illustrated by removing text elements from a UI, as shown in the figure below.
Three versions of the YouTube UI: (a) the original, (b) YouTube without text elements, and (c) YouTube without graphic elements. It becomes apparent how the textless version is stripped of the most useful information: it is almost impossible to choose a video to watch, and navigating the site is impossible.
In "Measuring user rated language quality: Development and validation of the user interface Language Quality Survey (LQS)", recently published in the International Journal of Human-Computer Studies, we describe the development and validation of a survey that enables users to provide feedback about the language quality of the user interface.

UIs are generally developed in one source language and translated afterwards string by string. The process of translation is prone to errors and might introduce problems that are not present in the source. These problems are most often due to difficulties in the translation process. For example, the word “auto” can be translated to French as automatique (automatic) or automobile (car), which obviously has a different meaning. Translators might choose the wrong term if context is missing during the process. Another problem arises from words that behave as a verb when placed in a button or as a noun if part of a label. For example, “access” can stand for “you have access” (as a label) or “you can request access” (as a button).

Further pitfalls are gender, prepositions without context or other characteristics of the source text that might influence translation. These problems sometimes even get aggravated by the fact that translations are made by different linguists at different points in time. Such mistranslations might not only negatively affect trustworthiness and brand perception, but also the acceptance of the product and its perceived usefulness.

This work was motivated by the fact that in 2012, the YouTube internationalization team had anecdotal evidence which suggested that some language versions of YouTube might benefit from improvement efforts. While expert evaluations led to significant improvements of text quality, these evaluations were expensive and time-consuming. Therefore, it was decided to develop a survey that enables users to provide feedback about the language quality of the user interface to allow a scalable way of gathering quantitative data about language quality.

The Language Quality Survey (LQS) contains 10 questions about language quality. The first five questions form the factor “Readability”, which describes how natural and smooth the text is to read. For instance, one question targets ease of understanding (“How easy or difficult to understand is the text used in the [product name] interface?”). Questions 6 to 9 summarize the frequency of (in)consistencies in the text, called “Linguistic Correctness”. The full survey can be found in the publication.

Case study: applying the LQS in the field

As the LQS was developed to discover problematic translations of the YouTube interface and allow focused quality improvement efforts, it was made available in over 60 languages and data were gathered for all these versions of the YouTube interface. To understand the quality of each UI version, we compared the results for the translated versions to the source language (here: US-English). First, we inspected the global item, in combination with Linguistic Correctness and Readability. Second, we inspected each item separately, to understand which notion of Linguistic Correctness or Readability showed worse (or better) values. Here are some results:
  • The data revealed that about one third of the languages showed subpar language quality levels, when compared to the source language.
  • To understand the source of these problems and fix them, we analyzed the qualitative feedback users had provided (every time someone selected the lower two end scale points, pointing at a problem in the language, a text box was surfaced, asking them to provide examples or links to illustrate the issues).
  • The analysis of these comments provided linguists with valuable feedback of various kinds. For instance, users pointed to confusing terminology, untranslated words that were missed during translation, typographical or grammatical problems, words that were translated but are commonly used in English, or screenshots in help pages that were in English but needed to be localized. Some users also pointed to readability aspects such as sections with old fashioned or too formal tone as well as too informal translations, complex technical or legal wordings, unnatural translations or rather lengthy sections of text. In some languages users also pointed to text that was too small or criticized the readability of the font that was used.
  • In parallel, in-depth expert reviews (so-called “language find-its”) were organized. In these sessions, a group of experts for each language met and screened all of YouTube to discover aspects of the language that could be improved and decided on concrete actions to fix them. By using the LQS data to select target languages, it was possible to reduce the number of language find-its to about one third of the original estimation (if all languages had been screened).
LQS has since been successfully adapted and used for various Google products such as Docs, Analytics, or AdWords. We have found the LQS to be a reliable, valid and useful tool to approach language quality evaluation and improvement. The LQS can be regarded as a small piece in the puzzle of understanding and improving localization quality. Google is making this survey broadly available, so that everyone can start improving their products for everyone around the world.

Space Wallpapers for your Desktop: Set 1

To save an image, first click to enlarge it, then right-click on it and choose "Save Image As...", or simply right-click on the thumbnail and choose "Save Link As...".
Note: all are high-resolution images, larger than 1024 x 768 px.

3D_Earth_Surface.jpg, 3D_Earth_over_Moon, 3D_Sunset

3d_space_1

cosmogony

3D_Earth_2

MOON

decollages

3D_Earth