
Text-to-Speech for low resource languages (episode 2): Building a parametric voice



This is the second episode in the series of posts reporting on the work we are doing to build text-to-speech (TTS) systems for low resource languages. In the previous episode, we described the crowdsourced data collection effort for Project Unison. In this episode, we describe our work to construct a parametric voice based on that data.

In our previous episode, we described building TTS systems for low resource languages, and how one of the objectives of data collection for such systems was to quickly build a database representing multiple speakers. There are two main justifications for this approach. First, professional voice talents are often not available for under-resourced languages, so we need to record ordinary people who get tired reading tedious text rather quickly. Hence, the amount of text a person can record is rather limited and we need multiple speakers for a reasonably sized database that can be used by others as well. Second, we wanted to be able to create a voice that sounds human but is not identifiable as a real person. Various concatenative approaches to speech synthesis, such as unit selection, are not very suitable for this problem. This is because the selection algorithm may join acoustic units from different speakers generating a very unnatural sounding result.

Parametric speech synthesis techniques are an attractive approach for building voices from the kind of multi-speaker corpora described above. This is because in parametric synthesis the training stage of the statistical component handles multiple speakers by estimating an averaged-out representation of the acoustic parameters of each individual speaker. Depending on the number of speakers in the corpus, their acoustic similarity and the ratio of speaker genders, the resulting acoustic model can represent an average voice that is indistinguishable from a human voice and yet cannot be traced back to any of the actual speakers recorded during the data collection.
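To make the idea of an averaged-out representation concrete, here is a minimal, hypothetical sketch in Python/NumPy (not the actual training pipeline): it pools frame-level acoustic features from several speakers and estimates shared statistics, which is roughly what the statistical training stage does at a far larger scale and with far richer, context-dependent models.

import numpy as np

# Hypothetical frame-level acoustic features (e.g. spectral parameters) per speaker:
# each entry is an array of shape (num_frames, feature_dim).
rng = np.random.default_rng(0)
speakers = {f"speaker_{i}": rng.normal(loc=i, scale=1.0, size=(500, 40))
            for i in range(5)}

# Pool frames from all speakers and estimate shared statistics.
# The "average voice" effect comes from exactly this kind of pooling.
all_frames = np.concatenate(list(speakers.values()), axis=0)
avg_mean = all_frames.mean(axis=0)   # averaged acoustic parameter means
avg_var = all_frames.var(axis=0)     # shared variance across speakers

print("pooled frames:", all_frames.shape)
print("average-voice mean (first 5 dims):", np.round(avg_mean[:5], 2))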

We decided to use two different approaches to acoustic modeling in our experiments. The first approach uses Hidden Markov Models (HMMs). This well-established technique was pioneered by Prof. Keiichi Tokuda at Nagoya Institute of Technology, Japan and has been widely adopted in academia and industry. It is also supported by a dedicated open-source HMM synthesis toolkit. The resulting models are small enough to fit on mobile devices.

The second approach relies on Recurrent Neural Networks (RNNs) and vocoders that jointly mimic the human speech production system. Vocoders mimic the vocal apparatus to provide a parametric representation of speech audio that is amenable to statistical mapping. RNNs provide a statistical mapping from the text to the audio and have feedback loops in their topology, allowing them to model temporal dependencies between various phonemes in human speech. In 2015, Yannis Agiomyrgiannakis proposed Vocaine, a vocoder that outperforms the state-of-the-art technology in speed as well as quality. In 2013, Heiga Zen, Andrew Senior and Mike Schuster proposed a neural network-based model that mimics the deep structure of human speech production for speech synthesis. The model has since been extended into a Long Short-Term Memory (LSTM) RNN, which allows long-term memorization and is well suited to speech applications. Earlier this year, Heiga Zen and Hasim Sak described an LSTM RNN architecture specifically designed for fast speech synthesis. LSTM RNNs are also used in our Automatic Speech Recognition (ASR) systems, recently mentioned on our blog.
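As a rough illustration of the statistical mapping an LSTM RNN performs in this setting (a toy sketch only, not the architecture described above), a recurrent network can map a sequence of per-frame linguistic feature vectors to a sequence of vocoder parameters. The feature sizes below are assumptions for the example, not values used in our systems.

import numpy as np
import tensorflow as tf

NUM_LINGUISTIC = 100   # assumed size of per-frame linguistic feature vector
NUM_ACOUSTIC = 60      # assumed size of per-frame vocoder parameter vector

# A toy LSTM RNN acoustic model: linguistic features in, vocoder parameters out.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(None, NUM_LINGUISTIC)),
    tf.keras.layers.LSTM(128, return_sequences=True),  # recurrence models temporal context
    tf.keras.layers.Dense(NUM_ACOUSTIC),
])
model.compile(optimizer="adam", loss="mse")

# Dummy training batch: 8 utterances, 200 frames each.
x = np.random.rand(8, 200, NUM_LINGUISTIC).astype("float32")
y = np.random.rand(8, 200, NUM_ACOUSTIC).astype("float32")
model.fit(x, y, epochs=1, verbose=0)

# At synthesis time, the predicted frames would be passed to a vocoder
# (something in the spirit of Vocaine) to generate the waveform.
print(model.predict(x[:1]).shape)  # (1, 200, NUM_ACOUSTIC)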

Using the Hidden Markov Model (HMM) and LSTM RNN synthesizers described above, we experimented with a multi-speaker Bangla corpus totaling 1526 utterances (waveforms and corresponding transcriptions) from five different speakers. We also built a third system that utilizes an LSTM RNN acoustic model, but this time we made it small and fast enough to run on a mobile phone.

We synthesized a Bangla sentence whose English translation is “This is an example sentence in Bangla” with each of these systems. Though the HMM synthesizer output is intelligible, it exhibits some classic downsides: the voice sounds buzzy and muffled. With the LSTM RNN configuration for mobile devices, the resulting audio sounds clearer and has improved intonation over the HMM version. We also tried an LSTM RNN configuration with more network nodes (and thus not suitable for low-end mobile devices) to generate this waveform; the quality is slightly better, but not a huge improvement over the more lightweight LSTM RNN version. We hypothesize that this is because a neural network with many nodes has more parameters and thus requires more data to train.

These early results are encouraging for several reasons. First, they confirm that natural-sounding speech synthesis based on multiple speakers is practically possible. It is also significant that the total number of recordings used was relatively small, yet we were able to build an intelligible parametric speech synthesizer. This means that it is possible to collect training data for such a speech synthesizer by engaging the help of volunteers who are not professional voice artists, for a short period of time per person. Using multiple volunteers is an advantage: it results in more diverse data, and the resulting synthetic voice does not represent any specific individual. This approach may well be the foundation for bringing speech technology to many more traditionally under-served languages.

NEXT UP: But can it say, “Google”? (Ep.3)

HDR+: Low Light and High Dynamic Range photography in the Google Camera App



As anybody who has tried to use a smartphone to photograph a dimly lit scene knows, the resulting pictures are often blurry or full of random variations in brightness from pixel to pixel, known as image noise. Equally frustrating are smartphone photographs of scenes where there is a large range of brightness levels, such as a family photo backlit by a bright sky. In high dynamic range (HDR) situations like this, photographs will either come out with an overexposed sky (turning it white) or an underexposed family (turning them into silhouettes).

HDR+ is a feature in the Google Camera app for Nexus 5 and Nexus 6 that uses computational photography to help you take better pictures in these common situations. When you press the shutter button, HDR+ actually captures a rapid burst of pictures, then quickly combines them into one. This improves results in both low-light and high dynamic range situations. Below we delve into each case and describe how HDR+ works to produce a better picture.

Capturing low-light scenes

The camera on a smartphone has a small lens, meaning that it doesn't gather much light. If a scene is dimly lit, the resulting photograph will contain image noise. One solution is to lengthen the exposure time - how long the sensor chip collects light. This reduces noise, but since it's hard to hold a smartphone perfectly steady, long exposures have the unwanted side effect of blurring the shot. Devices with optical image stabilization (OIS) sense this "camera shake" and shift the lens rapidly to compensate. This allows longer exposures with less blur, but it can't help with really dark scenes.

HDR+ addresses this problem by taking a burst of shots with short exposure times, aligning them algorithmically, and replacing each pixel with the average color at that position across all the shots. Averaging multiple shots reduces noise, and using short exposures reduces blur. HDR+ also begins the alignment process by choosing the sharpest single shot from the burst, a technique astronomers call lucky imaging, used to reduce the blurring of images caused by Earth's shimmering atmosphere.
A low light example is captured at dusk. The picture at left was taken with HDR+ off and the picture at right with HDR+ on. The HDR+ image is brighter, cleaner, and sharper, with much more detail seen in the subject’s hair and eyelashes. Photos by Florian Kainz
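The core of this idea can be sketched in a few lines of Python. The snippet below is a simplification, not the HDR+ pipeline: it assumes the burst frames are already aligned (which the real pipeline does algorithmically), picks the sharpest frame as a reference in the spirit of lucky imaging, and averages the burst to suppress noise.

import numpy as np

def sharpness(frame):
    """Simple sharpness score: mean gradient magnitude of the frame."""
    gy, gx = np.gradient(frame.astype(np.float64))
    return np.mean(np.hypot(gx, gy))

def merge_burst(frames):
    """Toy burst merge: pick the sharpest frame, then average the burst.

    `frames` is a list of 2-D grayscale arrays assumed to be aligned already;
    real burst merging must align frames and handle motion between them.
    """
    reference = max(frames, key=sharpness)       # "lucky imaging" reference
    merged = np.mean(np.stack(frames), axis=0)   # averaging reduces noise ~sqrt(N)
    return reference, merged

# Simulated burst: a clean scene plus per-shot read noise.
rng = np.random.default_rng(1)
scene = rng.uniform(0.2, 0.8, size=(64, 64))
burst = [scene + rng.normal(0, 0.05, scene.shape) for _ in range(8)]

ref, result = merge_burst(burst)
print("noise before:", np.std(burst[0] - scene).round(4))
print("noise after: ", np.std(result - scene).round(4))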
Capturing high dynamic range scenes

Another limitation of smartphone cameras is that their sensor chips have small pixels. This limits the camera's dynamic range, which refers to the span between the brightest highlight that doesn't blow out (turn white) and the darkest shadow that doesn't look black. One solution is to capture a sequence of pictures with different exposure times (sometimes called bracketing), then align and blend the images together. Unfortunately, bracketing causes parts of the long-exposure image to blow out and parts of the short-exposure image to be noisy. This makes alignment hard, leading to ghosts, double images, and other artifacts.

However, bracketing is not actually necessary; one can use the same exposure time in every shot. By using a short exposure HDR+ avoids blowing out highlights, and by combining enough shots it reduces noise in the shadows. This enables the software to boost the brightness of shadows, saving both the subject and the sky, as shown in the example below. And since all the shots look similar, alignment is robust; you won’t see ghosts or double images in HDR+ images, as one sometimes sees with other HDR software.
A classic high dynamic range situation. With HDR+ off (left), the camera exposes for the subjects’ faces, causing the landscape and sky to blow out. With HDR+ on (right), the picture successfully captures the subjects, the landscape, and the sky. Photos by Ryan Geiss
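A rough back-of-the-envelope example of why constant-exposure merging works (illustrative numbers, not measurements from HDR+): averaging N equally exposed shots reduces uncorrelated noise by about the square root of N, which is what makes it safe to brighten the shadows afterwards.

import math

# Illustrative numbers only: assume each short exposure has shadow noise
# with a standard deviation of 10 (in arbitrary sensor units).
single_shot_noise = 10.0
num_shots = 9

# Averaging N shots reduces uncorrelated noise by ~sqrt(N).
merged_noise = single_shot_noise / math.sqrt(num_shots)
print(f"noise after merging {num_shots} shots: {merged_noise:.1f}")  # ~3.3

# The shadows can then be brightened roughly 3x and still look about as clean
# as a single unbrightened shot, while the short exposure keeps the
# highlights from blowing out.
print(f"safe shadow boost: ~{single_shot_noise / merged_noise:.0f}x")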
Our last example illustrates all three of the problems we’ve talked about - high dynamic range, low light, and camera shake. With HDR+ off, for a photo of Princeton University Chapel (shown below) taken with a Nexus 6, the camera chooses a relatively long 1/12-second exposure. Although optical image stabilization reduces camera shake, this is a long time to hold a camera still, so the image is slightly blurry. Since the scene was very dark, the walls are noisy despite the long exposure. Therefore, strong denoising is applied, causing smearing (below, left inset image). Finally, because the scene also has high dynamic range, the window at the end of the nave is blown out (below, right inset image), and the side arches are lost in darkness.
Click here to see the full resolution image. Photo by Marc Levoy
HDR+ mode performs better on all three problems, as seen in the image below: the chandelier at left is cleaner and sharper, the window is no longer blown out, there is more detail in the side arches, and since a burst of shots is captured and the software begins alignment by choosing the sharpest shot in the burst (lucky imaging), the resulting picture is sharp.
Click here to see the full resolution image. Photo by Marc Levoy
Here's an album containing these comparisons and others as high-resolution images. For each scene in the album there is a pair of images captured by a Nexus 6; the first was taken with HDR+ off, and the second with HDR+ on.

Tips on using HDR+

Capturing a burst in HDR+ mode takes between 1/3 second and 1 second, depending on how dark the scene is. During this time you'll see a circle animating on the screen (left image below). Try to hold still until it finishes. The combining step also takes time, so if you scroll to the camera roll right after taking the shot, you'll see a thumbnail image and a progress bar (right image below). When the bar reaches 100%, your HDR+ picture is ready.
Should you leave HDR+ mode on? We do. The only times we turn it off are for fast-moving sports, because HDR+ pictures take longer to capture than a single shot, or for scenes that are so dark we need the flash. But before you turn off HDR+ for these action shots or super-dark scenes, give it a try; we think you'll be surprised how well it works!

At this time HDR+ is available only on Nexus 5 and Nexus 6, as part of the Google Camera app.


Crowdsourcing a Text-to-Speech voice for low resource languages (episode 1)



Building a decent text-to-speech (TTS) voice for any language can be challenging, but creating one – a good, intelligible one – for a low resource language can be downright impossible. By definition, working with low resource languages can feel like a losing proposition – from the get go, there is not enough audio data, and the data that exists may be questionable in quality. High quality audio data, and lots of it, is key to developing a high quality machine learning model. To make matters worse, most of the world’s oldest, richest spoken languages fall into this category. There are currently over 300 languages, each spoken by at least one million people, and most will be overlooked by technologists for various reasons. One important reason is that there is not enough data to conduct meaningful research and development.

Project Unison is an on-going Google research effort, in collaboration with the Speech team, to explore innovative approaches to building a TTS voice for low resource languages – quickly, inexpensively and efficiently. This blog post will be one of several to track progress of this experiment and to share our experience with the research community at large – our successes and failures in a trial and error, iterative approach – as our adventure plays out.

One of the most critical aspects of building a TTS system is acquiring audio data. The traditional way to do this is in a professional recording studio with a voice talent, a sound engineer and a voice director. The process can take considerable time and can be quite expensive. People often assume voice talent work is similar to that of a news reader, but it is highly specialized and the work can be very difficult.

Such investments in time and money may yield great audio, but the catch is that even if you’ve created the best TTS voice from these recordings, at best it will still sound exactly like the voice talent - the person who provided the raw audio data. (We’ve read the articles about people who have fallen for their GPS voice, only to find out that they are real people with real names.) So the interesting problem here from a research perspective is how to create a voice that sounds human but is not identifiable as a singular person.

Crowd-sourcing projects for automatic speech recognition (ASR) for Google Voice Search had been successful in the past, with public volunteers eager to participate by providing voice samples. For ASR, the objective is to collect from a diversity of speakers and environments, capturing varying regional accents. The polar opposite is true of TTS, where the basic criteria are a single, unique speaker with a standard accent, recorded in a soundproof studio.

Many years ago, Yannis Agiomyrgiannakis, Digital Signal Processing researcher on the TTS team in Google London, wrote a “manifesto” for acoustic data collection for 2000 languages. In his document, he gave technical specifications on how to convert an average room into a recording studio. Knot Pipatsrisawat, software engineer in Google Research for Low Resource Languages, built a tool that we call “ChitChat”, a portable recording studio, using Yannis’ specifications. This web app allows users to read the prompt, play back the recording and even assess the noise level of the room.
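ChitChat's internals are not described here, but the kind of noise-level check it performs can be illustrated with a minimal sketch: record a short stretch of room tone and report its level in dBFS. The function name, sample rate and threshold below are made up for illustration.

import numpy as np

def noise_level_dbfs(samples):
    """Return the RMS level of a recording in dB relative to full scale.

    `samples` are floating-point audio samples in [-1.0, 1.0]; a quiet
    room gives a strongly negative value (e.g. below -60 dBFS).
    """
    rms = np.sqrt(np.mean(np.square(samples)))
    return 20.0 * np.log10(max(rms, 1e-10))

# Simulated second of "room tone" at 48 kHz (threshold is illustrative only).
rng = np.random.default_rng(2)
room_tone = rng.normal(0.0, 0.001, size=48000)

level = noise_level_dbfs(room_tone)
print(f"room noise: {level:.1f} dBFS ->", "OK to record" if level < -50 else "too noisy")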
From other past research in ASR, we knew that the right tool could solve the crowdsourcing problem. ChitChat allowed us to experiment in different environments to get an idea of what kind of office space would work and what kinds of problems we might encounter. After experimenting with several different laptops and tablets, we were able to find a computer that recognized the necessary peripherals (the microphone, USB converter, and preamp) for under $2,000 – much cheaper than a recording studio!

Now we needed multiple speakers of a single language. For us, it was a no-brainer to pilot Project Unison with Bangladeshi Googlers, all of whom are passionate about getting Google products to their home country (the success of Android products in Bangladesh is an example of this). Googlers by and large are passionate about their work and many offer their 20% time as a way to help, to improve or to experiment on something that may or may not work because they care. The Bangladeshi Googlers are no exception. They embodied our objectives for a crowdsourcing innovation: out of many, we could achieve (literally) one voice.

With multiple speakers, we would target speakers of similar vocal profiles and adapt them to create a blended voice. Statistical parametric synthesis is not new, but the advances in recent technology have improved quality and proved to be a lightweight solution for a project like ours.

In May of this year, we auditioned 15 Bangladeshi Googlers in Mountain View. From these recordings, the broader Bangladeshi Google community voted blindly for their preferred voice. Zakaria Haque, software engineer in Machine Intelligence, was chosen as our reference for the Bangla voice. We then narrowed down the group to five speakers based on these criteria: Dhaka accent, male (to match Zakaria’s), similarity in pitch and tone, and availability for recordings. The original plan of a spectral analysis using PRAAT proved to be unnecessary with our limited pool of candidates.

All 5 software engineers – Ahmed Chowdury, Mohammad Hossain, Syeed Faiz, Md. Arifuzzaman Arif, Sabbir Yousuf Sanny – plus Zakaria Haque recorded over 3 days in the anechoic chamber, a makeshift sound-proofed room at the Mountain View campus just before Ramadan. HyunJeong Choe, who had helped with the Korean TTS recordings, directed our volunteers.
Left: TPM Mohammad Khan measures the distance from the speaker to the mic to keep the sound quality consistent across all speakers. Right: Analytical Linguist HyunJeong Choe coaches SWE Ahmed Chowdury on how to speak in a friendly, knowledgeable, "Googly" voice
ChitChat allowed us to troubleshoot on the fly as recordings could be monitored from another room using the admin panel. In total, we recorded 2000 Bangla and English phrases mined from Wikipedia. In 30-60 minute intervals, the participants recorded over 250 sentences each.

In this session, we discovered an issue: a sudden drop in amplitude at high frequencies in a few recordings. We were worried that all the recordings might have to be scrapped.
In the spectral analysis of one speaker (speaker3), energy drops off above 13 kHz; the drop is also audible in the speech, distorting the speaker’s voice to sound as if he were speaking through a tube.
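A quick way to screen recordings for this kind of problem (a hypothetical sketch, not the diagnostic we actually used) is to compare the spectral power above 13 kHz with the power below it; a much lower ratio than a speaker's other sessions would flag the issue.

import numpy as np
from scipy.signal import welch

def high_band_ratio(samples, sample_rate, cutoff_hz=13000):
    """Ratio of spectral power above `cutoff_hz` to power below it."""
    freqs, power = welch(samples, fs=sample_rate, nperseg=2048)
    high = power[freqs >= cutoff_hz].sum()
    low = power[freqs < cutoff_hz].sum()
    return high / low

# Simulated 48 kHz recordings: broadband noise vs. the same signal low-passed.
rng = np.random.default_rng(3)
healthy = rng.normal(size=48000)
muffled = np.convolve(healthy, np.ones(8) / 8, mode="same")  # crude low-pass

print("healthy recording ratio:", round(high_band_ratio(healthy, 48000), 4))
print("suspect recording ratio:", round(high_band_ratio(muffled, 48000), 4))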
Another challenge was that we didn’t have a pronunciation lexicon for Bangla as spoken in Bangladesh. We worked initially with the publicly available TTS data from the Indian Institute of Information Technology, but this represented the variant of Bangla spoken in West Bengal (India), which differs from the speech we recorded. Our internally designed pronunciation rules for Bengali were also aimed at West Bengal and would need to be revised later.

Deciding to proceed anyway, Alexander Gutkin, Speech software engineer and lead for TTS for Low Resource Languages in Google London, built an initial prototype voice. Using the preliminary text normalization rules created by Richard Sproat, Speech and Language Processing researcher, the first voice we attempted proved to be surprisingly good. The problem in the high frequencies we had seen in the recordings is undetectable in the parametric voice.
When we return to the sound studio to record an additional 200 longer sentences, we plan to try an upgrade of the USB converter. Meanwhile, Martin Jansche, Natural Language Understanding software engineer, has worked with a team of native speakers on a pronunciation lexicon and a model that better match the phonology of colloquial Bangladeshi Bangla. Alexander will use the additional recordings and the new pronunciation dictionary to build the second version.

NEXT UP: Building a parametric voice with multiple speaker data (Ep.2)

Low cost Wireless I/O’s using PLC, HMI & ZIGBEE



Low cost Wireless I/O’s using PLC, HMI & ZIGBEE



Abstract— During the past decade, the industrial sector throughout the world has shifted from classical methods of control and automation to state-of-the-art techniques. This has allowed industries to attain a higher percentage of growth and production, which in turn has reduced product costs. Even so, this trend towards automation is gaining popularity at a very slow pace due to the huge initial costs associated with it. This problem can be addressed by promoting wireless I/O’s interfaced to Programmable Logic Controllers using Zigbee, which might encourage industries to take the path of modern automation.

Keywords— Wireless I/O’s, PLC, HMI, ZIGBEE, Automation System.
I. Introduction

PLC’s are solid-state devices that use integrated circuits to control processes or machines. They can store instructions for sequencing, counting, timing, arithmetic, data manipulation and communication [7]. A PLC is an example of a hard real-time system, since output results must be produced in response to input conditions within a bounded time, otherwise unintended operation will result. The PLC reads the status of external input devices, e.g. keypads, sensors, switches and pulses, and its microprocessor executes logic, sequential, timing, counting and arithmetic operations according to the status of the input signals as well as the pre-written program stored in the PLC [8]. The generated output signals are sent to output devices such as relay switches, electromagnetic valves and motor drives to control a machine or run a procedure, for the purpose of machine automation or process control.
Zigbee falls into the same category of wireless technologies as GSM and RF. Zigbee provides the wireless communication link: it only removes the cost and maintenance of the connecting wires, while the rest of the process stays the same. Zigbee conveys a particular on/off bit status to the other side, so the same message or data is received at the other end just as it would be over a wire. Thus Zigbee replaces the connecting wires and provides wireless communication [1].
Whereas wireless PLCs typically use a modem to transmit signals from the PLC to the process, here we use Zigbee as the communication interface for transmitting and receiving signals between the PLC and the process. Zigbee is a wireless technology developed as an open global standard to address the unique needs of low-cost, low-power wireless personal area networks (WPANs). The Zigbee standard builds on the IEEE 802.15.4 physical and MAC layers.

II. Method of Interfacing

The PLC and the SCADA/HMI are placed in the control room, which houses the PLC input and output modules and a TARANG Zigbee module for receiving and transmitting signals.


Figure 1 Block Diagram
The TARANG Zigbee acts as a transceiver. A pair of TARANG Zigbee modules, transmitter and receiver, is used for the wireless exchange of data, enabling two-way communication. The application, monitoring, takes place at the field/process site, which is located in a remote area and consists of sensors and actuators [1].
The process controlled in the project using a PLC is a batch process used to manufacture a chemical, and the parameter to be controlled and measured is level.



Some applications require specific quantities of raw materials to be combined in specific ways for particular durations to produce an intermediate or end result. A batch process performs a list of actions in a sequence: it executes a series of non-interactive actions all at one time, and once a batch job begins, it continues until it is done. One example is the production of adhesives and glues, which normally requires the mixing of raw materials in a heated vessel for a period of time to form a quantity of end product. Other important examples are the production of food, beverages and medicine. Batch processes are generally used to produce a relatively low to intermediate quantity of product per year (a few pounds to millions of pounds).

Benefits of Batch Process
·     A batch process can be used to automate much of the work.
·     Batch processing can save time and energy by automating repetitive tasks.
·     Batch time can be adjusted to meet quality specs
·     Slow dynamics permit real-time calculations

Batch process components:

·     Process Tank 1 (for mixing the two liquids in proportion)
·     UNIT Tank 1 (containing liquid A)
·     UNIT Tank 2 (containing liquid B)
·     Stirrer
·     Solenoid Valve 1
·     Solenoid Valve 2
·     Solenoid Valve 3
·     High Level Float Switches
The BATCH PROCESS setup is placed in a remote area (the field/process site) and is controlled wirelessly, via TARANG Zigbee, from the control room where the PLC, SCADA and the control panel are located. The process is controlled through SCADA and the PLC in auto mode and can also be controlled in manual mode. The process can be controlled and monitored directly from SCADA and also from the control panel.
The main components are the TARANG Zigbee and the PLC, which control the whole process. The TARANG is used as the source and operates in direct mode. The PLC is used to control and monitor the batch process. [1]




III. TARANG ZigBee

TARANG modules are designed for low to medium transmit power and for high-reliability wireless networks. The modules require minimal power and provide reliable delivery of data between devices. The interfaces provided with the module help it fit directly into many industrial applications. The modules operate within the 2.4-2.4835 GHz ISM frequency band with an IEEE 802.15.4 baseband. [3]

IV. Role of TMFT Software

TMFT software is used to configure the TARANG Zigbee via an RS-232 or USB cable [5]. The configuration steps are listed below, followed by a minimal serial sketch of the same command sequence.

Module Programming:
Step 1: Open the TMFT software.
Step 2: Connect the TARANG module to the serial/USB port.
Step 3: Choose the appropriate port and serial parameters in the terminal software and press “query modem”.
Step 4: To configure I/O pins as inputs and outputs, follow the steps below.
Step 5: Enter command mode with ‘+++’. The response from the modem should be ‘OK’.
Step 6: Enable the desired I/O pin as an input with the command ATIDxx. In this example the first I/O line, ID0, is used; to configure it as a digital input, send the command ATID02. The response from the module should be ‘OK’.
Step 7: Write these parameters to memory with the ‘ATGWR’ command.
Step 8: Follow the same steps for configuring I/O pins as outputs.
Step 9: Exit command mode with the ‘ATGEX’ command.
Note: Once I/O pins are configured as inputs, their default status will be logic high (3.3 V).
Step 10: Enable the desired I/O pin as an output with the command ATIDxx. In this example the first I/O line, ID0, is used; to configure it as a digital output, send the command ATID40. The response from the module should be ‘OK’.
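The same sequence can also be scripted instead of typed into a terminal. The sketch below uses pyserial and the AT commands quoted in the steps above; the port name, baud rate and guard-time handling are assumptions, so check them against the TMFT manual [5] before use.

import time
import serial  # pyserial

PORT = "/dev/ttyUSB0"   # assumed port name; use e.g. "COM3" on Windows
BAUD = 9600             # assumed serial parameters; confirm in the TMFT manual

def send(ser, command):
    """Send one AT command and return the module's response line."""
    ser.write((command + "\r").encode("ascii"))
    return ser.readline().decode("ascii", errors="replace").strip()

with serial.Serial(PORT, BAUD, timeout=2) as ser:
    ser.write(b"+++")           # enter command mode (Step 5)
    time.sleep(1)               # assumed guard time before the module replies
    print(ser.readline())       # expect OK

    print(send(ser, "ATID02"))  # configure I/O line ID0 as a digital input (Step 6)
    print(send(ser, "ATGWR"))   # write parameters to memory (Step 7)
    print(send(ser, "ATGEX"))   # exit command mode (Step 9)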




V. Design and Implementation
The PLC provides analog and digital input/output that can be used to control the field devices. For the PLC to exchange data wirelessly, a wireless interface is needed: messages from the controller are sent to the PLC through RF transceivers. Thus, two RF transceiver circuits have to be developed that can communicate with each other, one on the process side and one on the control side.
PROCESS SIDE:
The figure below shows the process side hardware. The components used on the process side are three tanks, a Zigbee module, solenoid valves, float switches and a relay card. The Zigbee receives the data and controls the process. The circuit or wiring diagram is given below.





Figure 5 Process Side Hardware
 CONTROL SIDE:



The above figure shows the control panel which controls the field/process site. The main components are the TARANG Zigbee and the PLC, which control the whole process. The TARANG is used as the source and operates in direct mode. The PLC is used to control and monitor the batch process. The circuit or wiring diagram is given below.

VI. Working
When the START button is pressed by the user, the batch process starts. According to the control algorithm written in the PLC, the control signals from the output module of the controller are given to TARANG Zigbee ‘A’, which acts as a transmitter. Solenoid valve 1 is first switched on, for the time set on the HMI screen, to feed liquid from UNIT TANK 1, and then solenoid valve 2 is switched on for UNIT TANK 2. If the PROCESS tank is about to overflow, the corresponding solenoid valve (1 or 2) is switched off to avoid the overflow condition. After this the agitator is switched on to mix the two liquids uniformly. As soon as the agitator is switched off, the uniformly mixed liquid is ready; solenoid valve 3 is then switched on and we get the output. During this operation the TARANG Zigbee on the process side acts as a receiver and the TARANG Zigbee on the control side acts as a transmitter.
Now if the tank level of UNIT TANK 1, UNIT TANK 2 or the PROCESS tank becomes high, the signal is received by the Zigbee on the process side and sent to the control-side Zigbee, which is monitored on the input section of the PLC and SCADA. During this operation the TARANG Zigbee on the control side acts as a receiver and the TARANG Zigbee on the process side acts as a transmitter.
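The PLC program itself is ladder logic, but the control sequence described above can be sketched in a few lines of Python to make the timing and interlocks explicit. All names, timings and the float-switch polling below are illustrative assumptions, not the project’s actual program.

import time

# Illustrative stand-ins for the PLC outputs (driven over the Zigbee link)
# and the high-level float switch input; names and timings are assumptions.
outputs = {"valve1": False, "valve2": False, "valve3": False, "stirrer": False}

def set_output(name, state):
    outputs[name] = state
    print(f"{name} -> {'ON' if state else 'OFF'}")

def process_tank_high():
    return False  # placeholder for reading the PROCESS tank float switch

def fill_from_unit_tank(valve, fill_seconds):
    """Open a feed valve for the HMI-set time, closing early on a high level."""
    set_output(valve, True)
    deadline = time.time() + fill_seconds
    while time.time() < deadline and not process_tank_high():
        time.sleep(0.1)  # overflow interlock poll
    set_output(valve, False)

def run_batch(fill1_s=5, fill2_s=5, mix_s=3, drain_s=2):
    fill_from_unit_tank("valve1", fill1_s)   # liquid A from UNIT TANK 1
    fill_from_unit_tank("valve2", fill2_s)   # liquid B from UNIT TANK 2
    set_output("stirrer", True)              # mix the two liquids uniformly
    time.sleep(mix_s)
    set_output("stirrer", False)
    set_output("valve3", True)               # discharge the mixed batch
    time.sleep(drain_s)
    set_output("valve3", False)

run_batch()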

 

VII. HMI



HMI stands for Human Machine Interface. It is not a full control system, but rather focuses on the supervisory level. As such, it is purely a software package that is positioned on top of the hardware to which it is interfaced, in general via Programmable Logic Controllers (PLCs) or other commercial hardware modules. [6]



It is a small scale control system for automated industrial processes like municipal water supplies, power generation, steel manufacturing etc.

HMI systems monitor and control these operations by gathering data from sensors at the facility or remote station and then sending it to a central computer system that manages the operations using this information.
The above figure shows the main MENU screen of the HMI. Using it, the operator can select the mode of operation (auto or manual mode) and check the status of the overall process; the settings screen is used to set the on/off timers for the solenoid valves, the I/O list is used to check the status of inputs and outputs, and the alarm screen is used for displaying the overflow condition.
VIII. Conclusion
The batch process is a very important and widely used process in today’s industries. This work is an effort to eliminate the use of cables for transmission and to overcome a major drawback in a small setup. We achieve the task of monitoring as well as controlling the batch process from a remote location via a control panel, a PLC program and an HMI. The parameter controlled in this closed-loop process is level.
We have used TARANG Zigbee to transmit signals wirelessly between the Control room and the Process.
References
[1]    Kaushik Bhuiya, Kintali Anish, “Low cost wireless control and monitoring using PLC and SCADA”, International Journal of Scientific and Research Publications, Volume , Issue 9, September 2013, ISSN 2250-3153.
[2]    Programmable Logic Controller Instruction Sheet, DELTA.
[3]    www.melangesystems.com
[4]    S. S. Bidwai, V. B. Kumbhar, “Real Time Automated Control using PLC-VB Communication”, International Journal of Engineering Research and Applications (IJERA), Vol. 3, Issue 3, pp. 658-661, June 2013.
[5]    TMFT 2.6 test utility and configuration software manual.
[6]    DELTA PLC Manual, Delta Electronics Inc., www.delta.com.tw/industrialautomation.
[7]    T. Kalaiselvi, R. Praveena, Aakanksha R., Dhanya S., “PLC Based Automatic Bottle Filling and Capping System With User Defined Volume Selection”, IJETAE, Volume 2, Issue 8, August 2012.
[8]    T. Kalaiselvi, R. Praveena, Aakanksha R., Dhanya S., “PLC Based Automatic Bottle Filling and Capping System With User Defined Volume Selection”, IJETAE, Volume 2, Issue 8, August 2012.