
Monday, December 20, 2021

About Touch Screen Technology – part 2

The Technology Behind It

The idea of using a touchscreen goes back to the early days of computing in the 1960s. Most systems remained very experimental until the 1990s, when a number of commercial systems became successful.

The first major technology to become successful was resistive touchscreen technology. This uses a panel that consists of several layers, including two thin, electrically charged layers separated by a thin space. Pressing on the panel brings these two layers into contact, and the location of the connection is recorded as the input. Resistive touchscreens are relatively cheap and very resistant to liquids. The major downsides are that you need to press down with a certain amount of pressure and that they have relatively poor contrast. As a result, resistive touchscreens did not become widely used for computer systems but instead were built into other electronic systems. For example, the display screens used in restaurants to enter orders and the control panels in factories are often resistive touchscreens.

The second major technology used in touchscreens is capacitive sensing. A capacitive touchscreen uses a layer of insulating material, such as glass, coated with a transparent conductor. The human body is also an electrical conductor, so touching the screen with your finger results in a change in the electrostatic field of the screen. A number of different approaches can be used to record the location of the touch. One of the most common ways is to use a fine grid of capacitors, which record the change in the electrostatic field. These capacitors are organized by rows and columns, and they function independently of each other. This makes it possible to record multiple touches at the same time, known as multi-touch technology.
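As a rough illustration of the row-and-column idea, here is a minimal Python sketch of how a controller might report several simultaneous touches from a grid of capacitance readings. The grid size, threshold, and sample values are invented for the example and do not come from any particular controller.

# Minimal sketch: detecting multiple simultaneous touches on a
# capacitive grid. Grid size, threshold and sample data are
# illustrative assumptions, not values from any real controller.

THRESHOLD = 30  # assumed minimum change in capacitance that counts as a touch

def find_touches(delta_grid):
    """Return (row, col) cells whose capacitance change exceeds the threshold.

    delta_grid is a list of rows, each row a list of capacitance changes
    measured at the row/column intersections. Because every intersection
    is read independently, several touches can be reported at once.
    """
    touches = []
    for r, row in enumerate(delta_grid):
        for c, delta in enumerate(row):
            if delta >= THRESHOLD:
                touches.append((r, c))
    return touches

# Example: two fingers pressed at opposite corners of a 4x4 grid.
sample = [
    [55,  2,  1,  0],
    [ 3,  4,  0,  1],
    [ 0,  1,  5,  2],
    [ 1,  0,  3, 60],
]

print(find_touches(sample))   # -> [(0, 0), (3, 3)] : two touch points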

Multi-Touch Gestures

At a basic level, touchscreen technology works similarly to a computer mouse. Instead of moving a pointer with the mouse and then clicking on a location, you press on the location with your finger. However, multi-touch technology makes it possible to interact with the computer display in many more ways. Tasks like scrolling up and down a webpage, selecting text, and drag-and-drop work quite differently with touchscreen technology. The figure below shows a number of the most common single and multi-touch gestures.



Common single and multi-touch gestures

 

From <https://study.com/academy/lesson/touchscreen-technology-definition-lesson-quiz.html>

4 Touch Panel Types – Explained

Touch screens are found everywhere from our smartphones to self-serve kiosks at the airport. Given their many uses, it should come as no surprise that there are several touch monitor types. Each has its advantages and disadvantages and is suited to specific tasks.

Continue reading to learn more about touch monitor types and how they’re used.

Did you know that touch panel technology was invented in the 60s?

That’s right. Long before your precious smartphone entered the market in the late 00s, touch panels had already been an established technology for nearly 4 decades. 

Despite the panels’ simplicity of use, the underlying technology is more complex than it appears, with 4 different touch panel types in existence. 

Before we get to that, let’s back up.

It’s quite possible that you’re not clear on exactly what a touch panel is, what the touch panel types are, or how they’re applied in your daily life, beyond that of your smartphone. For that and more, we’re here to help. 

What Are Touch Panels?

Quite simply, touch panels, which are also known as touchscreens or touch monitors, are tools that allow people to operate computers through direct touch. More specifically, internal sensors detect a user’s touch and translate it into an instructional command, which the system carries out as a visible on-screen response.

Touch Panel Types in the Professional World

It would be a mistake to assume that the applications of all these touch panel types are limited to that of consumer-level devices, or even those that have been previously mentioned. Really, these touch panel types can be found throughout everyday life and in a variety of industries.

What’s more, in many of these industries, touch panels are used less to market products to consumers and more to sell solutions to businesses. Whether in finance, manufacturing, retail, medicine, or education, there is always a need for touch-based solutions. In conjunction with the so-called ‘Internet of Things’, these touch-based solutions play a key role in practices related to Industry 4.0.

In practice, these solutions largely offer a form of personnel management. In hospitals, stores, or banks, for instance, these touch panel types can be used to answer basic questions, provide product information, or offer directions, based on the user’s needs. When it comes to manufacturing, on the other hand, these solutions enable employee management in the possible form of workplace allocation or attendance tracking. 


At the end of the day, touch panels are here to stay. In the four decades since their inception, the level of adoption this technology has experienced is remarkable. They transform how we teach in classrooms and collaborate with colleagues. 

Although you may not have been clear on the specific details of each touch panel type, we hope that you are now. This knowledge will absolutely serve you well, particularly if you’re interested in ViewSonic’s selection of touch-based solutions.

From <https://www.viewsonic.com/library/business/touch-panel-types-explained/>

A Brief Note on Touch Screen Technology

In this article, we will learn about a technology that has become an integral part of our lives: The Touch Screen Technology.  You can find Touchscreens almost everywhere like mobile phones, tablets, laptops, automobiles, gaming consoles, printers, elevators, industries, ATMs, shopping malls, ticket vending machines to name a few.

With the increase in demand for intuitive and easy GUI (graphical user interface), the development in touch screen technology has also taken an exponential curve. So, we will try to learn a little about touch screen technology, different types of touch screen technologies available, the advantages and disadvantages of each technology etc.

What is a Touch Screen?

Simply speaking, a Touch Screen is an input device in an electronic system. Traditionally, if we take our computers, the input devices include keyboard and mouse. But in a touch screen, you can provide the input to the system, well, by simply touching the screen.

A touch screen device may or may not include an electronic display unit but in most cases the touch screen technology is usually fixed on top of a display unit (like in a mobile phone).

The way we interact with our electronic devices like TVs and mobiles has been completely changed with the touch screen technology. For example, the interaction with a computer is made very simple as you control the computer directly through its display without the need for other input devices.

2D Human Machine Interface

Touch screens are a type of user interface that allows a touch-based human-machine interface. A touch screen is considered a two-dimensional sensing device. Buttons (touch or tactile), by contrast, provide a single point of contact, so they are one-dimensional input devices.



Coming to touch screens (or touch pads), you can touch, drag, write, swipe, pinch, etc. in the x-y plane. Hence, they are two-dimensional input devices.

There is a three-dimensional user interface known as Gesture Control, where hand gestures in free space act as input.

Components of a Touch Screen

Any touch screen device, whether a mobile phone or a tablet computer, usually consists of three important components. They are:

Touch Sensor

Controller

Software    

The Touch Sensor is the device which measures the parameters of contact between the device and an object. It measures the contact force at any point.


The Controller is responsible for capturing the “touch” information from the touch sensor and providing it to a main controlling device, such as a microcontroller or a processor.

Finally, the software is responsible for making the main microcontroller or processor work in harmony with the touch sensor and its controller.
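To make the division of labour concrete, here is a hedged, highly simplified Python sketch of how the three components might fit together. The class names, raw reading format, and scaling factors are all invented for illustration; a real controller is a dedicated chip that talks to the host over an interface such as I2C, SPI, or USB.

# Illustrative only: the sensor, controller and driver below are toy stand-ins
# for the three components described above.

class TouchSensor:
    """Pretends to measure a raw signal change at the point of contact."""
    def read_raw(self):
        # Assumed raw ADC-style reading (x, y) in sensor units.
        return (512, 384)

class Controller:
    """Translates raw sensor readings into coordinates the host understands."""
    def __init__(self, sensor, screen_w=800, screen_h=480, raw_max=1023):
        self.sensor = sensor
        self.screen_w, self.screen_h, self.raw_max = screen_w, screen_h, raw_max

    def get_touch(self):
        raw_x, raw_y = self.sensor.read_raw()
        # Scale raw sensor units to screen pixels.
        return (raw_x * self.screen_w // self.raw_max,
                raw_y * self.screen_h // self.raw_max)

class Driver:
    """Hands the controller's coordinates to the operating system / application."""
    def __init__(self, controller, on_touch):
        self.controller = controller
        self.on_touch = on_touch   # callback provided by the OS or app

    def poll(self):
        self.on_touch(self.controller.get_touch())

# Example use: print every reported touch.
Driver(Controller(TouchSensor()), print).poll()   # -> (400, 180)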

Touch Screen Technology Types

Based on the types of Touch Sensor used in the development of a touch screen, there are 5 types of touch screen technologies. They are:

·         Resistive Touch Screen Technology

·         Capacitive Touch Screen Technology

·         Infrared Touch Screen Technology

·         Acoustic Wave Touch Screen Technology

·         Near Field Imaging Touch Screen Technology

Let us briefly look at each of these technologies. But before going into the details, one point to remember is that almost all touch screen devices are part of a display unit such as an LCD, TFT, LED, or CRT.

WHICH TYPE OF TOUCH SCREEN IS BEST FOR YOU?

You interact with a touch screen monitor constantly throughout your daily life. You will see them in cell phones, ATMs, kiosks, ticket vending machines, manufacturing plants and more. All of these use touch panels to enable the user to interact with a computer or device without the use of a keyboard or mouse. But did you know there are several uniquely different types of touch screens? The five most common types of touch screen are: 5-Wire Resistive, Surface Capacitive touch, Projected Capacitive (P-Cap), SAW (Surface Acoustic Wave), and IR (Infrared).


We are often asked “How does a touch screen monitor work?” A touch screen basically replaces the functionality of a keyboard and mouse. Below is a basic description of the 5 types of touch screen monitor technology. The advantages and disadvantages of each type of touch screen will help you decide which touchscreen is most appropriate for your needs:

Resistive Touch Screen

5-Wire Resistive Touch is the most widely used touch technology today. A resistive touch screen monitor is composed of a glass panel and a film screen, each covered with a thin metallic layer, separated by a narrow gap. When a user touches the screen, the two metallic layers make contact, resulting in a flow of current. The point of contact is detected from this change in voltage.
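A minimal Python sketch of the last step, assuming a controller that has already measured the two layer voltages with an ADC; the ADC range and screen size below are example values, not figures from any specific panel.

# Minimal sketch of how a resistive controller might turn two voltage
# readings into a screen position. The ADC range and screen size are
# assumed values for illustration only.

ADC_MAX = 4095          # assumed 12-bit ADC
SCREEN_W, SCREEN_H = 800, 480

def resistive_position(adc_x, adc_y):
    """Map the voltage-divider readings of the two layers to pixels.

    When the flexible top layer is pressed onto the bottom layer, each axis
    behaves like a potentiometer: the measured voltage is proportional to
    where along that axis the layers make contact.
    """
    x = adc_x * SCREEN_W // ADC_MAX
    y = adc_y * SCREEN_H // ADC_MAX
    return x, y

print(resistive_position(2048, 1024))   # -> (400, 120), mid-screen horizontally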

Advantages:

·         Can activate with virtually any object (finger, stylus, gloved hand, pen, etc.)

·         Has tactile feel

·         Lowest cost touch technology

·         Low power consumption

·         Resistant to surface contaminants and liquids (dust, oil, grease, moisture)

Disadvantages:

·         Lower image clarity compared to other touch technologies

·         Outer polyester film is vulnerable to damage from scratching, poking and sharp objects

Surface Capacitive Touch Screen

Surface Capacitive touch screen is the second most popular type of touch screen on the market. In a surface capacitive touch screen monitor, a transparent electrode layer is placed on top of a glass panel. This is then covered by a protective cover. When an exposed finger touches the monitor screen, it reacts to the static electrical capacity of the human body. Some of the electrical charge transfers from the screen to the user. This decrease in capacitance is detected by sensors located at the four corners of the screen, allowing the controller to determine the touch point. Surface capacitive touch screens can only be activated by the touch of human skin or a stylus holding an electrical charge.
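The Python sketch below illustrates the corner-sensing idea. The simple ratio calculation is our own simplification and ignores the calibration and linearisation a real controller applies; the currents and screen size are example values.

# Rough sketch: estimating the touch point from the current drawn at the
# four corner electrodes of a surface capacitive panel. The simple ratio
# below ignores the calibration a real controller would apply.

SCREEN_W, SCREEN_H = 800, 480

def corner_currents_to_position(i_tl, i_tr, i_bl, i_br):
    """i_* are the (assumed, already-measured) corner currents.

    The closer the finger is to a corner, the more current that corner
    supplies, so the ratios of the currents encode the touch position.
    """
    total = i_tl + i_tr + i_bl + i_br
    x = (i_tr + i_br) / total * SCREEN_W     # share drawn by the right-hand corners
    y = (i_bl + i_br) / total * SCREEN_H     # share drawn by the bottom corners
    return round(x), round(y)

# A touch near the bottom-right corner draws most of its current there.
print(corner_currents_to_position(5, 10, 10, 75))   # -> (680, 408)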

Advantages:

·         Better image clarity than Resistive Touch

·         Durable screen

·         Excellent resistance to surface contaminants and liquids (dust, oil, grease, water droplets)

·         High scratch resistance

Disadvantages:

·         Requires bare finger or capacitive stylus for activation

·         Sensitivity to EMI/RFI

Projected Capacitive Touch Screen

Projected Capacitive (P-Cap) is similar to Surface Capacitive, but it offers two primary advantages. First, in addition to a bare finger, it can also be activated with surgical gloves or thin cotton gloves. Secondly, P-Cap enables multi-touch activation (simultaneous input from two or more fingers). A projected capacitive touch screen is composed of a sheet of glass with embedded transparent electrode films and an IC chip. This creates a three dimensional electrostatic field. When a finger comes into contact with the screen, the ratios of the electrical currents change and the computer is able to detect the touch points. All our P-Cap touch screens feature a Zero-Bezel enclosure.

Advantages:

·         Excellent image clarity

·         More resistant to scratching than resistive

·         Resistant to surface contaminants and liquids (dust, oil, grease, moisture)

·         Multi-touch (two or more touch points)

Disadvantages:

·         Sensitive to EMI/RFI

·         Must be activated via exposed finger, or thin surgical or cotton gloves

SAW (Surface Acoustic Wave) Touch

SAW (Surface Acoustic Wave) touch screen monitors utilize a series of piezoelectric transducers and receivers. These are positioned along the sides of the monitor’s glass plate to create an invisible grid of ultrasonic waves on the surface. When the panel is touched, a portion of the wave is absorbed. This allows the receiving transducer to locate the touch point and send this data to the computer. SAW monitors can be activated by a finger, gloved hand, or soft-tip stylus. SAW monitors offer easy use and high visibility.
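As a rough, assumption-laden illustration of the timing idea behind SAW sensing, the Python sketch below finds where along one axis a transmitted burst was attenuated. The wave speed, sample rate, and waveform are invented for the example.

# Illustrative sketch of the SAW idea: a transmitted ultrasonic burst is
# sampled at the receiving transducer, and the time at which part of the
# wave has been absorbed tells us how far along the axis the touch was.
# Wave speed, sample rate and waveform are invented for the example.

WAVE_SPEED_M_S = 3100          # assumed speed of the surface wave in glass
SAMPLE_RATE_HZ = 1_000_000     # assumed sampling rate of the receiver

def locate_dip(received, baseline, drop=0.3):
    """Return the distance (in mm) at which the wave was attenuated.

    received / baseline are amplitude samples with and without a touch;
    the first sample that falls 'drop' below the baseline marks the touch.
    """
    for i, (r, b) in enumerate(zip(received, baseline)):
        if r < b * (1.0 - drop):
            time_s = i / SAMPLE_RATE_HZ
            return time_s * WAVE_SPEED_M_S * 1000.0   # metres -> millimetres
    return None   # no touch detected on this axis

baseline = [1.0] * 100
touched  = [1.0] * 40 + [0.5] * 10 + [1.0] * 50   # dip 40 samples in
print(locate_dip(touched, baseline))               # -> 124.0 (mm along this axis)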

Advantages:

·         Excellent image clarity

·         Even better scratch resistance than surface or projected capacitive

·         High “touch-life”

Disadvantages:

·         Will not activate with hard items (pen, credit card, or fingernail)

·         Water droplets remaining on the surface of the screen can cause false triggering

·         Solid contaminants on the screen can create non-touch areas until they are removed

IR (Infrared) Touch Screen


IR (Infrared) type touch screen monitors do not overlay the display with an additional screen or screen sandwich. Instead, infrared monitors use IR emitters and receivers to create an invisible grid of light beams across the screen. This ensures the best possible image quality. When an object interrupts the invisible infrared light beam, the sensors are able to locate the touch point. The X and Y coordinates are then sent to the controller.
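A minimal Python sketch of the beam-interruption idea: each blocked column beam and blocked row beam is recorded, and a touch is reported wherever a blocked column crosses a blocked row. The beam counts and readings below are illustrative assumptions.

# Minimal sketch of an infrared grid: each horizontal and vertical beam has a
# receiver, and a touch is wherever a blocked X beam crosses a blocked Y beam.
# The beam counts and readings are illustrative assumptions.

def locate_ir_touches(x_beams, y_beams):
    """x_beams / y_beams are lists of booleans: True means the beam is blocked.

    Returns every (x_index, y_index) where a blocked column meets a blocked row.
    """
    blocked_x = [i for i, blocked in enumerate(x_beams) if blocked]
    blocked_y = [j for j, blocked in enumerate(y_beams) if blocked]
    return [(x, y) for x in blocked_x for y in blocked_y]

# A finger interrupting the 3rd vertical beam and the 5th horizontal beam:
x_beams = [False, False, True, False, False, False]
y_beams = [False, False, False, False, True, False]
print(locate_ir_touches(x_beams, y_beams))   # -> [(2, 4)]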


Advantages:

·         Highest image clarity and light transmission of all touch technologies

·         Unlimited “touch-life”

·         Impervious to surface scratches

·         Multi-touch (two or more touch points)

·         Palm Rejection Capability

Disadvantages:

·         Accidental activation may occur because the infrared beams are actually above the glass surface

·         Dust, oil, or grease buildup on the screen or frame can impede the light beams, causing malfunctions

·         Buildup of snow and pooling of water (such as rain) can cause false triggering

·         May be sensitive to direct high ambient light interference

·         Higher cost 

From <https://tru-vumonitors.com/touch-screen-basics/>

 

For the list of all discussed seminar topics, click here: Index.

…till next post, bye-bye and take care.



Sunday, December 19, 2021

About Touch Screen Technology

Traditional input devices for computer systems include keyboards and mice. In recent years, touchscreen technology has become widely used as a way to interact with computer systems, particularly for mobile devices.

A touchscreen is an electronic visual display that a user can control by touching the screen with one or more fingers. A touchscreen allows for a much more direct interaction with what is displayed compared to a device like a mouse. Touchscreens have become very common on tablet computers, smartphones and other mobile devices. Increasingly, regular laptop and desktop computers use touchscreen displays so users can use touch as well as more traditional forms of input.

 

Touch screen technology: as per Wikipedia

A touchscreen or touch screen is the assembly of both an input ('touch panel') and output ('display') device. The touch panel is normally layered on top of an electronic visual display of an information processing system. The display is often an LCD, AMOLED or OLED display, while the system is usually a laptop, tablet, or smartphone. A user can give input or control the information processing system through simple or multi-touch gestures by touching the screen with a special stylus or one or more fingers. Some touchscreens work with ordinary or specially coated gloves, while others may only work using a special stylus or pen. The user can use the touchscreen to react to what is displayed and, if the software allows, to control how it is displayed; for example, zooming to increase the text size.

The touchscreen enables the user to interact directly with what is displayed, rather than using a mouse, touchpad, or other such devices (other than a stylus, which is optional for most modern touchscreens).

Touchscreens are common in devices such as game consoles, personal computers, electronic voting machines, and point-of-sale (POS) systems. They can also be attached to computers or, as terminals, to networks. They play a prominent role in the design of digital appliances such as personal digital assistants (PDAs) and some e-readers. Touchscreens are also important in educational settings such as classrooms or on college campuses.

The popularity of smartphones, tablets, and many types of information appliances is driving the demand and acceptance of common touchscreens for portable and functional electronics. Touchscreens are found in the medical field, heavy industry, automated teller machines (ATMs), and kiosks such as museum displays or room automation, where keyboard and mouse systems do not allow a suitably intuitive, rapid, or accurate interaction by the user with the display's content.

Historically, the touchscreen sensor and its accompanying controller-based firmware have been made available by a wide array of after-market system integrators, and not by display, chip, or motherboard manufacturers. Display manufacturers and chip manufacturers have acknowledged the trend toward acceptance of touchscreens as a user interface component and have begun to integrate touchscreens into the fundamental design of their products. 

Development

The development of multi-touch screens facilitated the tracking of more than one finger on the screen; thus, operations that require more than one finger are possible. These devices also allow multiple users to interact with the touchscreen simultaneously.

With the growing use of touchscreens, the cost of touchscreen technology is routinely absorbed into the products that incorporate it and is nearly eliminated. Touchscreen technology has demonstrated reliability and is found in airplanes, automobiles, gaming consoles, machine control systems, appliances, and handheld display devices including cellphones; the touchscreen market for mobile devices was projected to produce US$5 billion by 2009.

The ability to point accurately on the screen itself is also advancing with the emerging graphics tablet-screen hybrids. Polyvinylidene fluoride (PVDF) plays a major role in this innovation due to its high piezoelectric properties, which allow the tablet to sense pressure, making such things as digital painting behave more like paper and pencil.

 

TapSense, announced in October 2011, allows touchscreens to distinguish what part of the hand was used for input, such as the fingertip, knuckle and fingernail. This could be used in a variety of ways, for example, to copy and paste, to capitalize letters, to activate different drawing modes, etc.

A real practical integration between television-images and the functions of a normal modern PC could be an innovation in the near future: for example "all-live-information" on the internet about a film or the actors on video, a list of other music during a normal video clip of a song or news about a person.

Touchscreen Accuracy

For touchscreens to be effective input devices, users must be able to accurately select targets and avoid accidental selection of adjacent targets. The design of touchscreen interfaces should reflect technical capabilities of the system, ergonomics, cognitive psychology and human physiology.

Guidelines for touchscreen designs were first developed in the 1990s, based on early research and actual use of older systems, typically using infrared grids—which were highly dependent on the size of the user's fingers. These guidelines are less relevant for the bulk of modern devices which use capacitive or resistive touch technology.

 

From the mid-2000s, makers of operating systems for smartphones have promulgated standards, but these vary between manufacturers, and allow for significant variation in size based on technology changes, so are unsuitable from a human factors perspective.

Much more important is the accuracy humans have in selecting targets with their finger or a pen stylus. The accuracy of user selection varies by position on the screen: users are most accurate at the center, less so at the left and right edges, and least accurate at the top edge and especially the bottom edge. The R95 accuracy (required radius for 95% target accuracy) varies from 7 mm (0.28 in) in the center to 12 mm (0.47 in) in the lower corners. Users are subconsciously aware of this, and take more time to select targets which are smaller or at the edges or corners of the touchscreen.
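One practical way to use these numbers is to scale touch targets by their position on screen. The Python sketch below interpolates a recommended target radius between the centre and corner R95 values; the linear interpolation is our own simplification, not a published guideline.

# A crude way to turn the R95 figures above (about 7 mm in the centre, about
# 12 mm in the lower corners) into a recommended target radius for a given
# screen position. The linear interpolation is a simplification for
# illustration, not a published design rule.

R95_CENTRE_MM = 7.0
R95_CORNER_MM = 12.0

def recommended_radius_mm(x, y, width, height):
    """x, y in pixels; width, height the full screen size in pixels."""
    # Normalised distance from the screen centre (0 = centre, 1 = a corner).
    dx = abs(x - width / 2) / (width / 2)
    dy = abs(y - height / 2) / (height / 2)
    d = min(1.0, (dx * dx + dy * dy) ** 0.5 / (2 ** 0.5))
    return R95_CENTRE_MM + d * (R95_CORNER_MM - R95_CENTRE_MM)

print(round(recommended_radius_mm(960, 540, 1920, 1080), 1))   # centre -> 7.0
print(round(recommended_radius_mm(0, 1080, 1920, 1080), 1))    # corner -> 12.0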

 

This user inaccuracy is a result of parallax, visual acuity and the speed of the feedback loop between the eyes and fingers. The precision of the human finger alone is much, much higher than this, so when assistive technologies are provided—such as on-screen magnifiers—users can move their finger (once in contact with the screen) with precision as small as 0.1 mm (0.004 in).

 

From <https://en.wikipedia.org/wiki/Touchscreen>

 

What is a Touch Screen Technology & Its Working

Touch screen technology is the direct manipulation type of gesture-based technology. Direct manipulation is the ability to manipulate the digital world inside a screen. A touch screen is an electronic visual display capable of detecting and locating a touch over its display area; this generally means touching the display of the device with a finger or hand. The technology is most widely used in computers, user-interactive machines, smartphones, tablets, etc. to replace most functions of the mouse and keyboard.

Touch screen technology has been around for a number of years, but advanced touch screen technology has come on in leaps and bounds recently, and companies are including it in more of their products. The three most common touch screen technologies are resistive, capacitive, and SAW (surface acoustic wave). Most low-end touch screen devices contain a standard printed circuit plug-in board and communicate over the SPI protocol. The system has two parts, namely hardware and software. The hardware architecture consists of a stand-alone embedded system using an 8-bit microcontroller, several types of interfaces, and driver circuits. The system software driver is developed in the C programming language.

What is a Touch Screen Technology?

Touch screen technology is the assembly of a touch panel as well as a display device. Generally, a touch panel is layered over an electronic visual display within a processing system. Here the display is an LCD or OLED, while the system is normally a smartphone, tablet, or laptop. A user can give input through simple touch gestures by touching the screen with a special stylus or with fingers. Some kinds of touch screens work with ordinary or specially coated gloves, whereas others may only work with the help of a special pen.

The operator uses the touch screen to respond to what is displayed and, if the device's software permits, to control how it is displayed, for example by zooming the screen to enlarge the text. The touch screen thus allows the operator to interact directly with the displayed information instead of using a touchpad, mouse, etc. Touch screens are used in devices such as personal computers, game consoles, and EVMs, and they are also valuable in educational settings such as college classrooms.

Who Invented Touch Screen?

The first concept of a touch screen was described and published in 1965 by E.A. Johnson. The first touch screen was then developed in the early 1970s by CERN engineers Bent Stumpe and Frank Beck, and the first touch screen device was built and used in 1973. The first resistive touch screen was designed in 1975 by George Samuel Hurst, but it wasn't launched and used until 1982.

How Does Touch Screen Technology Work?

Different types of touchscreen technology work in different ways. Some can detect only one finger at a time and get confused if you try to press in two places at once. Other types of screens can detect and distinguish more than one press at once. The main components used in touchscreen technology are described below.

Operation of Touch Screen Panel

A basic touch screen has three main components: a touch sensor, a controller, and a software driver. The touch screen needs to be combined with a display and a PC to make a complete touch screen system.

Touch Sensor

The sensor generally has an electrical current or signal going through it, and touching the screen causes a change in the signal. This change is used to determine the location of the touch on the screen.

Controller

A controller is connected between the touch sensor and the PC. It takes information from the sensor and translates it into a form the PC can understand. The controller determines what type of connection is needed.

Software Driver

It allows the computer and the touch screen to work together. It tells the OS how to interpret the touch event information that is sent from the controller.

Modes of Touch Screen

A touch screen can be operated in several ways, such as single tap, double-tap, touch and hold, swipe, and pinch (a small classification sketch follows the list below).

·         In a single tap, a single touch is used to tap on the screen to open an app or to choose an object.

·         In a double-tap, two quick touches are used for different functions, such as zooming the display or selecting a word or set of words.

·         The touch-and-hold option is mainly used to choose an object in order to drag it; it can also be used to unlock the screen or power the device on or off.

·         Swiping a finger over the screen is used to type letters on the on-screen keyboard. It is also used to move between pages and to close unwanted apps.

·         In a pinch, two fingers are used to zoom the display in or out.
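Below is a minimal, assumption-laden Python sketch of telling some of these modes apart from recorded touch events. The thresholds and the event format are invented for the example, and the double-tap case (which needs the timing of two consecutive taps) is left out; real operating systems use far more elaborate gesture recognisers.

# Toy gesture classifier. Thresholds and event format are assumptions made
# for this example only.

TAP_MAX_S    = 0.3    # assumed: touches shorter than this count as taps
HOLD_MIN_S   = 0.8    # assumed: touches longer than this count as touch-and-hold
SWIPE_MIN_PX = 50     # assumed: movement beyond this counts as a swipe

def classify(events):
    """events: list of (finger_id, t_down, t_up, x_down, y_down, x_up, y_up)."""
    if len(events) == 2:
        return "pinch (or other two-finger gesture)"
    (_, t0, t1, x0, y0, x1, y1) = events[0]
    duration = t1 - t0
    moved = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    if moved >= SWIPE_MIN_PX:
        return "swipe"
    if duration >= HOLD_MIN_S:
        return "touch and hold"
    if duration <= TAP_MAX_S:
        return "single tap"
    return "unclassified"

print(classify([(0, 0.00, 0.12, 100, 200, 102, 199)]))   # -> single tap
print(classify([(0, 0.00, 0.40, 100, 200, 300, 210)]))   # -> swipe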

Transparent Touch Screen Technology

Transparent touch screens work by combining two modern technologies to make a cutting-edge display that is hard to ignore. These touch screens deliver 4K or HD images, depending on the display size, just like a normal professional screen. The main difference between a transparent and a normal touch screen is the clear screen substrate: white pixels appear completely transparent, black pixels do not, and the full range of RGB colors appears semi-transparent. Transparent touch screens are available in different types, such as transparent LCD screens and transparent OLED screens.

Why Some Touch Screens Work Only with a Bare Finger?

Once a bare finger is used to tap on the screen, it registers the commands. If you use a gloved finger or a stylus pen, it doesn't register the commands. The main reason is conductive properties. There are different kinds of touchscreen technologies available in the market, but the capacitive type is more popular than the others, because about 90% of the touch screens sold and shipped worldwide are powered by capacitive technology.

These touchscreens depend on conductivity to detect touch commands. If you use a non-conductive stylus or a gloved finger to control them, they won't record or react to your commands.

Application – Remote Control using Touch Screen Technology

The touch screen is one of the simplest PC interfaces to use for a large number of applications. A touch screen is useful for easily accessing information by simply touching the display screen. Touch screen systems are useful in applications ranging from industrial process control to home automation.


Touch Screen based Robotic Vehicle- Transmitter

 

From <https://www.elprocus.com/touch-screen-technology-working/>

Touch Screen Properties

The main properties of the touchscreen include the following.

·         Ball drop test

·         Clarity and Brightness

·         Mechanical and Mounting

·         4K vs Full-HD

·         HID Compatible

·         Touchpoints

·         Response Time

·         Touch Resolution

·         Raised Bezel

·         Latency / Lag / Touch Response

Advantages

The advantages of touchscreen technology include the following.

·         Easy to Clean and Maintain

·         Engaging and Interactive

·         Self-Service Feature

·         Keyboard and Mouse are not required

·         Speed and Efficiency

·         Mobility and Space

·         Durability and Resilience

·         Easy User Interface

Disadvantages

The disadvantages of touch screen technology include the following.

·         The display of the device has to be large to operate the screen properly

·         The display will get dirty

·         These are expensive as compared to normal devices

·         In direct sunlight, the screen is harder to read

·         Battery life is reduced because of the large, bright screen and the heavy computing power required

·         Accuracy & Feedback

·         Issues on On-screen Keyboard

·         Issues due to Sensitivity

·         Screen Size

·         Accidental Dialing

Applications

The applications of touchscreen technology include the following. Examples of touchscreen devices include smartphones, tablets, computers, and point-of-sale devices.

·         All-in-One computer

·         Touch screen printer

·         Ticket machine

·         Arcade game

·         Tablet

·         ATM

·         Car GPS

·         Smartphone

·         Signature pads

·         Camera

·         POS machine

·         Car stereo

·         Medical equipment

·         Cash register

·         Large interactive screen

·         Digital camcorder

·         In-flight entertainment screen

·         Laptop

·         Handheld game console

·         E-book

·         Grocery self-checkout machine

·         Kiosk

·         Gas station

·         Sewing machine

·         Fitness machine

·         Electronic whiteboard

·         Factory machine

Touch screens are supported by most computer makers, including Acer, HP, Dell, Microsoft, Lenovo, and other PC designers. Some high-end Google Chromebooks also use touch screens.

Thus, this is all about an overview of touchscreen technology. The main reasons manufacturers choose this technology instead of physical buttons are that it is intuitive, particularly to younger generations of users, it allows devices to be made smaller, and it makes devices cheaper to design. In touch screens, different technologies are used to let the operator control the screen: some use a finger, whereas others use tools such as a stylus. Here is a question for you: do touch screens use a keyboard?

From <https://www.elprocus.com/touch-screen-technology-working/>

 

Communication between humans and computer systems has come a long way from the keyboard and mouse. As more and more interaction is being done on mobile devices, touchscreen technology makes it possible to interact with a computer system using direct touch of the electronic display, eliminating the need for a bulky mouse or keyboard. Explore the definition and applications of touchscreen technology. 

 

For the list of all discussed seminar topics, click here: Index.

                                                                                                    …till next post, bye-bye and take care. 

Saturday, December 18, 2021

About Optical Computers – part 2

Working Principle of Optical Computer

The working principle of an optical computer is similar to that of a conventional computer, except that some portions perform their functional operations optically. Photons are generated by LEDs, lasers and a variety of other devices, and they can be used to encode data just as electrons can.

The design and implementation of optical transistors is currently in progress, with the ultimate aim of building an optical computer. Several optical transistor designs are being experimented with. A polarizing screen rotated by ninety degrees can effectively block a light beam, and optical transistors can also be made from dielectric materials that have the potential to act as polarizers. Optical logic gates are slightly more challenging, but fundamentally possible. They would involve one control beam and multiple input beams that together produce the correct logical output.
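As a toy numerical illustration of the logic-gate idea (not a physical model of any real optical transistor), the Python sketch below treats two input beams as intensities and registers a '1' only when their combined intensity at the detector crosses a threshold, reproducing an AND truth table.

# Toy illustration of an intensity-threshold optical gate. The intensities
# and threshold are arbitrary; this does not model a real device.

THRESHOLD = 1.5   # assumed intensity needed at the detector to register a '1'

def optical_and(beam_a, beam_b):
    """beam_a, beam_b: normalised input intensities (0.0 = off, 1.0 = on)."""
    combined = beam_a + beam_b          # beams superimposed on the detector
    return 1 if combined >= THRESHOLD else 0

for a in (0.0, 1.0):
    for b in (0.0, 1.0):
        print(a, b, "->", optical_and(a, b))
# Only the (1.0, 1.0) case crosses the threshold, reproducing an AND truth table.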


Fig. 6 – (a) Optical Network on Chip (b) Photonic Chip on Circuit

Electrons have one clear advantage: silicon channels and copper wires can be bent around corners and the electrons will follow. This effect can be emulated in optical chips using plasmonic nanoparticles, which let the light turn corners and continue on its path without major power loss or conversion back to electrons.

Most parts of an optical chip resemble any other commercially available computer chip. Electrons are deployed in the parts that transform or process information. The interconnects, however, change drastically. These interconnects are used to shuttle information between different areas of the chip. Instead of shuttling electrons, which can slow down when the interconnects heat up, light is shuttled, because light can be easily contained and loses less information during travel.

Researchers hope that this swift communication will help enable exascale computers, i.e. computers that perform a billion billion (10^18) calculations every second, roughly 1,000 times the processing speed of today's fastest systems.

Advantages of Optical Computer

The advantages of Optical Computer are:

·         Optical computers offer several major advantages: high density, small size, low junction heating, high speed, the ability to be dynamically scaled and reconfigured into smaller or larger networks and topologies, massive parallel computing ability and AI applications.

·         Apart from speed, Optical interconnections have several advantages. They are impervious to electromagnetic interference and are not prone to electrical short circuits.

·         They offer low-loss transmission and large bandwidth for parallel communication of several channels.

·         Optical processing of data is inexpensive and much easier than the processing done on electronic components.

·         Since photons are not charged, they do not readily interact with one another as electrons do. This adds another advantage: light beams can pass through each other, allowing full-duplex operation.

·         Optical materials have greater accessibility and storage density than magnetic materials.

Disadvantages of Optical Computer

The disadvantages of Optical Computer are:

·         Manufacturing Photonic Crystals is challenging.

·         Computation is complex as it involves interaction of multiple signals.

·         Bulky in size.

Future of Optical Computing

We can see interesting developments in lasers and lights. These are taking over the electronics in our computers. Optical technology is currently being promoted for use in parallel processing, storage area networks, Optical Data Networks, Optical Switches, Biometric and Holographic storage devices at airports.

Processors now contain light detectors and tiny lasers that facilitate data transmission through optical fiber. A few companies are even developing optical processors that use optical switches and laser light to do the calculations. One of the foremost promoters, Intel, has created an integrated silicon photonics link capable of transmitting 50 gigabits per second of uninterrupted information.

It is speculated that future computers could come without screens, with information presented as a hologram in the air above the keyboard. This kind of technology is being made possible by the collaboration of researchers and industry experts. Also, optical technology's most practical use, the optical networking business, is predicted to reach 3.5 billion dollars, up from 1 billion currently.

From <https://electricalfundablog.com/optical-computer/>

 

Optical Computing: Solving Problems at the Speed of Light

According to Moore’s law —actually more like a forecast, formulated in 1965 by Intel co-founder Gordon Moore— the number of transistors in a microprocessor doubles about every two years, boosting the power of the chips without increasing their energy consumption. For half a century, Moore’s prescient vision has presided over the spectacular progress made in the world of computing. However, by 2015, the engineer himself predicted that we are reaching a saturation point in current technology. Today, quantum computing holds out hope for a new technological leap, but there is another option on which many are pinning their hopes: optical computing, which replaces electronics (electrons) with light (photons).
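The doubling rule is easy to state as a back-of-the-envelope calculation. The Python sketch below projects a transistor count forward at one doubling every two years; the starting point (roughly 2,300 transistors on the Intel 4004 in 1971) is only used as an example.

# Moore's law as a quick back-of-the-envelope calculation: the transistor
# count roughly doubles every two years.

def transistors(start_count, start_year, year, doubling_period_years=2):
    return start_count * 2 ** ((year - start_year) / doubling_period_years)

# Example: roughly 2,300 transistors in 1971 (the Intel 4004) projected forward.
print(f"{transistors(2_300, 1971, 2021):,.0f}")   # on the order of tens of billions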

The end of Moore’s law is a natural consequence of physics: to pack more transistors into the same space they have to be shrunk down, which increases their speed while simultaneously reducing their energy consumption. The miniaturisation of silicon transistors has succeeded in breaking the 7-nanometre barrier, which used to be considered the limit, but this reduction cannot continue indefinitely. And although more powerful systems can always be obtained by increasing the number of transistors, in doing so the processing speed will decrease and the heat of the chips will rise.

THE HYBRIDIZATION OF ELECTRONICS AND OPTICS

Hence the promise of optical computing: photons move at the speed of light, faster than electrons in a wire. Optical technology is also not a newcomer to our lives: the vast global traffic on the information highways today travels on fibre optic channels, and for years we have used optical readers to burn and read our CDs, DVDs and Blu-Ray discs. However, in the guts of our systems, the photons coming through the fibre optic cable must be converted into electrons in the microchips, and in turn these electrons must be converted to photons in the optical readers, slowing down the process.

An overhead view of a new beamsplitter for silicon photonics chips, one-fiftieth the width of a human hair. Credit: Dan Hixson/University of Utah College of Engineering


Thus, it can be said that our current technology is already a hybridization of electronics and optics. “In the near-term, it is pretty clear that hybrid optical-electronic systems will dominate,” Rajesh Menon, a computer engineer at the University of Utah, tells OpenMind. “For instance, the vast majority of communications data is channelled via photons, while almost all computation and logic is performed by electrons.” And according to Menon, “there are fundamental reasons for this division of labour,” because while less energy is needed to transmit information in the form of photons, the waves associated with the electrons are smaller; that is, the higher speed of photonic devices has as its counterpart a larger size.

This is why some experts see limitations in the penetration of optics in computing. For Caroline Ross, a materials science engineer at the Massachusetts Institute of Technology (MIT), “the most important near-term application [for optics] is communications — managing the flow of optical data from fibres to electronics.” The engineer, whose research produced an optical diode that facilitates this task, tells OpenMind that “the use of light for actual data processing itself is a bit further out.”

THE LASER TRANSISTOR

But although we are still far from the 100% optical microchip —a practical system capable of computing only by using photons— advances are increasing the involvement of photonics in computers. In 2004, University of Illinois researchers Milton Feng and Nick Holonyak Jr. developed the concept of the laser transistor, which replaces one of the two electrical outputs of normal transistors with a light signal in the form of a laser, providing a higher data rate.

For example, today it is not possible to use light for internal communication between different components of a computer, due to the equipment that would be necessary to convert the electrical signal to optical and vice versa; the laser transistor would make this possible. “Similar to transistor integrated circuits, we hope the transistor laser will be [used for] electro-optical integrated circuits for optical computing,” Feng told OpenMind. The co-author of this breakthrough is betting on optical over quantum computing, since it does not require the icy temperatures at which quantum superconductors must operate.

Graduate students Junyi Wu and Curtis Wang and Professor Milton Feng found that light stimulates switching speed in the transistor laser. Credit: L. Brian Stauffer


Proof of the interest in this type of system is the intense research in this field, which includes new materials capable of supporting photon-based computing. Among the challenges still to be met in order to obtain optical chips, Menon highlights the integration density of the components in order to reduce the size, an area in which his laboratory is a pioneer, as well as a “better understanding of light-matter interactions at the nanoscale.”

Despite all this, we shouldn’t be overly confident that a photonic laptop will one day reach the hands of consumers. “We don’t expect optical computing to supplant electronic general-purpose computing in the near term,” Mo Steinman, vice president of engineering at Lightelligence, a startup from the photonics lab run by Marin Soljačić at MIT, told OpenMind.

Present and future of photonics

However, the truth is that nowadays this type of computing already has its own niches. “Application-specific photonics is already here, particularly in data centres and more recently in machine learning,” says Menon. In fact, Artificial Intelligence (AI) neural networks are being touted as one of its great applications, with the potential to achieve 10 million times greater efficiency than electronic systems. “Statistical workloads such as those employed in AI algorithms are perfectly suited for optical computing,” says Steinman.
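The reason these workloads suit optics is that the heavy operation in a neural network is a matrix-vector multiplication, which a photonic mesh can in principle carry out as light propagates through it. The Python sketch below only shows that operation being delegated to a stand-in function where the optical hardware would sit; the interface is invented here for illustration.

# Sketch of why neural-network inference suits optical accelerators: the heavy
# operation is a matrix-vector multiply. 'photonic_matvec' is a plain software
# stand-in for the step an optical chip would perform; the interface is assumed.

def photonic_matvec(weights, activations):
    # In a real optical accelerator this product would be computed by light
    # interfering in a programmed mesh; here we simply emulate it in software.
    return [sum(w * a for w, a in zip(row, activations)) for row in weights]

def relu(v):
    return [max(0.0, x) for x in v]

# A tiny two-layer network: the matrix multiplies are the parts an optical
# chip would accelerate, while the nonlinearity stays electronic.
W1 = [[0.5, -0.2, 0.1],
      [0.3,  0.8, -0.5]]
W2 = [[1.0, -1.0]]

x = [1.0, 2.0, 3.0]
hidden = relu(photonic_matvec(W1, x))
output = photonic_matvec(W2, hidden)
print(output)   # -> [0.0] for this made-up input and these made-up weights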

Thus, optical computing can solve very complex network optimization problems that would take centuries for classical computers. In Japan, the company NTT is building a huge optical computer that encloses five kilometres of fibre in a box the size of a room; it will be applied to complex power-grid and communications-network optimisation tasks.

A photonic integrated circuit. Credit: JonathanMarks


“Looking ahead, we believe we can leverage the ecosystem created by optical telecommunications in the areas of integrated circuit design, fabrication, and packaging, and optimize for the specific operating points required by optical computing,” Steinman predicts. However, he admits that moving from a prototype to full-scale manufacturing will be a difficult challenge.

In short, there are reasons for optimism about the development of optical computing, but without overestimating its possibilities: when computer scientist Dror Feitelson published his book Optical Computing (MIT Press) in 1988, there was talk of a new field that was already beginning to reach maturity. More than 30 years later, “optical computing is still more of a promise than a mainstream technology,” the author tells OpenMind. And the challenges still to be overcome are compounded by another stumbling block: technological inertia. Feitelson recalls the warning issued in those days by IBM researcher Robert Keyes: with the enormous experience and accumulated investment in electronics that we already know, “practically any other technology would be unable to catch up.”

From <https://www.bbvaopenmind.com/en/technology/future/optical-computing-solving-problems-at-the-speed-of-light/>

 Optical computers light up the horizon

 

Optical chips will power our future datacenters and supercomputers. Electronic chips can now have a layer of optical components, like lasers and switches, added to them to increase their computing power. Credit: Martijn Heck, Aarhus University

Since their invention, computers have become faster and faster, as a result of our ability to increase the number of transistors on a processor chip.

Today, your smartphone is millions of times faster than the computers NASA used to put the first man on the moon in 1969. It even outperforms the most famous supercomputers from the 1990s. However, we are approaching the limits of this electronic technology, and now we see an interesting development: light and lasers are taking over electronics in computers.

Processors can now contain tiny lasers and light detectors, so they can send and receive data through small optical fibres, at speeds far exceeding the copper lines we use now. A few companies are even developing optical processors: chips that use laser light and optical switches, instead of currents and electronic transistors, to do calculations.

So, let us first take a closer look at why our current technology is running out of steam. And then, of course, answer the main question: when can you buy that optical computer?

Moore's Law is dying

Computers work with ones and zeros for all their calculations and transistors are the little switches that make that happen. Current processor chips, or integrated circuits, consist of billions of transistors. In 1965, Gordon Moore, founder of Intel, predicted that the number of transistors per chip would double every two years. This became known as Moore's Law, and after more than half a century, it is still alive. Well, it appears to be alive...

In fact, we are fast reaching the end of this scaling. Transistors are now approaching the size of an atom, which means that quantum mechanical effects are becoming a bottleneck. The electrons, which make up the current, can randomly disappear from such tiny electrical components, messing up the calculations.

Moreover, the newest technology, where transistors have a size of only five nanometers, is now so complex that it might become too expensive to improve. A semiconductor fabrication plant for this five-nanometer chip technology, to be operational in 2020, has already cost a steep 17 billion US dollars to build.


Computer processor chips have plateaued

Looking more closely, however, the performance growth in transistors has been declining. Remember the past, when every few years faster computers hit the market? From 10 MHz clock speed in the 80s, to 100 MHz in the 90s and 1 GHz in 2000? That has stopped, and computers have been stuck at about 4 GHz for over 10 years.

Of course with smart chip design, for example using parallel processing in multi-core processors, we can still increase the performance, so your computer still works faster, but this increased speed is not due to the transistors themselves.

And these gains come at a cost. All those cores on the processor need to communicate with each other, to share tasks, which consumes a lot of energy. So much so that the communication on and between chips is now responsible for more than half of the total power consumption of the computer.

Since computers are everywhere, in our smartphone and laptop, but also in datacenters and the internet, this energy consumption is actually a substantial amount of our carbon footprint.

For example, there are bold estimations that intense use of a smartphone connected to the Internet consumes the same amount of energy as a fridge. Surprising, right? Do not worry about your personal electricity bill, though, as this is the energy consumed by the datacenters and networks. And the number and use of smartphones and other wearable tech keeps growing.

Fear not: lasers to the rescue

So, how can we reduce the energy consumption of our computers and make them more sustainable? The answer becomes clear when we look at the Internet.

In the past, we used electrical signals, going through copper wires, to communicate. The optical fibre, guiding laser light, has revolutionised communications, and has made the Internet what it is today: Fast and extending across the entire world. You might even have fibre all the way to your home.

We are using the same idea for the next generation computers and servers. No longer will the chips be plugged in on motherboards with copper lines, but instead we will use optical waveguides. These can guide light, just like optical fibres, and are embedded into the motherboard. Small lasers and photodiodes are then used to generate and receive the data signal. In fact, companies like Microsoft are already considering this approach for their cloud servers.

Optical chips are already a reality

Now I know what you're thinking right about now:

"But wait a second, how will these chips communicate with each other using light? Aren't they built to generate an electrical current?"

Yes, they are. Or, at least, they were. But interestingly, silicon chips can be adapted to include transmitters and receivers for light, alongside the transistors.

Researchers from the Massachusetts Institute of Technology in the US have already achieved this, and have now started a company (Ayar Labs) to commercialise the technology.

Here at Aarhus University in Denmark we are thinking even further ahead: If chips can communicate with each other optically, using laser light, would it not also make sense that the communication on a chip—between cores and transistors—would benefit from optics?

We are doing exactly that. In collaboration with partners across Europe, we are figuring out whether we can make more energy-efficient memory by writing the bits and bytes using laser light, integrated on a chip. This is very exploratory research, but if we succeed, it could change future chip technology as early as 2030.

The future: optical computers on sale in five years?

So far so good, but there is a caveat: Even though optics are superior to electronics for communication, they are not very suitable for actually carrying out calculations. At least, when we think binary—in ones and zeros.

Here the human brain may hold a solution. We do not think in a binary way. Our brain is not digital, but analogue, and it makes calculations all the time.

Computer engineers are now realising the potential of such analogues, or brain-like, computing, and have created a new field of neuromorphic computing, where they try to mimic how the human brain works using electronic chips.

And it turns out that optics are an excellent choice for this new brain-like way of computing.

The same kind of technology used by MIT and our team, at Aarhus University, to create optical communications between and on silicon chips, can also be used to make such neuromorphic optical chips.

In fact, it has already been shown that such chips can do some basic speech recognition. And two start-ups in the US, Lightelligence and Lightmatter, have now taken up the challenge to realise such optical chips for artificial intelligence.

Optical chips are still some way behind electronic chips, but we're already seeing the results and this research could lead to a complete revolution in computer power. Maybe in five years from now we will see the first optical co-processors in supercomputers. These will be used for very specific tasks, such as the discovery of new pharmaceutical drugs.

But who knows what will follow after that? In ten years these chips might be used to detect and recognise objects in self-driving cars and autonomous drones. And when you are talking to Apple's Siri or Amazon's Echo, by then you might actually be speaking to an optical computer.

While the 20th century was the age of the electron, the 21st century is the age of the photon – of light. And the future shines bright.

 

From <https://phys.org/news/2018-03-optical-horizon.html>

 

For the list of all discussed seminar topics, click here: Index.

…till next post, bye-bye and take care.