
Extreme Ultraviolet Lithography


Silicon has been the heart of the world's technology boom for nearly half a century, but microprocessor manufacturers have all but squeezed the life out of it. The current technology used to make microprocessors will begin to reach its limit around 2005. At that time, chipmakers will have to look to other technologies to cram more transistors onto silicon to create more powerful chips. Many are already looking at extreme-ultraviolet lithography (EUVL) as a way to extend the life of silicon at least until the end of the decade.

Potential successors to optical projection lithography are being aggressively developed. These are known as "Next-Generation Lithographies" (NGLs). EUV lithography (EUVL) is one of the leading NGL technologies; others include x-ray lithography, ion-beam projection lithography, and electron-beam projection lithography. Using extreme-ultraviolet (EUV) light to carve transistors into silicon wafers will lead to microprocessors that are up to 100 times faster than today's most powerful chips, and to memory chips with similar increases in storage capacity.

Energy transmission system for an artificial heart: leakage inductance compensation


The artificial heart now in use, like the natural heart it is designed to replace, is a four-chambered device for pumping blood. Electrical circulatory assist devices such as the total artificial heart or ventricular assist devices generally use a brushless DC motor as their pump. They require 12–35 W to operate and can be powered by a portable battery pack and a dc–dc converter.
It would be desirable to transfer electrical energy to these circulatory assist devices transcutaneously, without breaking the skin. This approach requires a power supply that uses a transcutaneous transformer to drive the motor of the circulatory assist device. The secondary of this transformer would be implanted under the skin, and the primary would be placed on top of the secondary, external to the body. The distance between the transformer windings would be approximately equal to the thickness of the patient's skin, nominally between 1 and 2 cm. This spacing cannot be assumed constant; the alignment of the cores and the distance between them will certainly vary during operation.
A transformer with a large (1–2 cm) air gap between the primary and the secondary has large leakage inductances. In this application, the coupling coefficient k ranges approximately from 0.1 to 0.4. This makes the leakage inductances of the same order of magnitude as, and usually larger than, the magnetizing inductance. Therefore, the voltage transfer gain is very low, and a significant portion of the primary current flows through the magnetizing inductance. The large circulating current through the magnetizing inductance results in poor efficiency.
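To see why such low coupling is troublesome, the following back-of-the-envelope sketch (in Python) uses the standard primary-referred T-model of a coupled inductor, in which the magnetizing inductance is k*L1 and the primary leakage inductance is (1 - k)*L1. The 100 uH self-inductance is an arbitrary illustrative value, not a figure from the paper.

# Rough numbers for the transcutaneous transformer, using the standard
# primary-referred T-model: magnetizing inductance = k*L1, leakage = (1-k)*L1.
L1 = 100e-6                      # primary self-inductance in henries (assumed)

for k in (0.1, 0.2, 0.3, 0.4):   # coupling range quoted for this application
    magnetizing = k * L1
    leakage = (1 - k) * L1
    print(f"k={k:.1f}: magnetizing = {magnetizing * 1e6:5.1f} uH, "
          f"leakage = {leakage * 1e6:5.1f} uH, "
          f"leakage/magnetizing = {leakage / magnetizing:.1f}")

Even at k = 0.4 the leakage inductance exceeds the magnetizing inductance, which is why an uncompensated design has low voltage gain and a large circulating current.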
A dc–dc converter employing secondary-side resonance has been reported to alleviate these problems by lowering the impedance of the secondary side with a resonant circuit. Although the circulating current is lowered, the voltage transfer gain varies widely as the coupling coefficient varies, so the advantageous characteristics are lost as the coupling coefficient deviates from its design value.
In this paper, compensation of the leakage inductances on both sides of the transcutaneous transformer is presented. This converter offers significant improvements over the previously reported converter in the following aspects.

·         High voltage gain with relatively small variation with respect to load changes as well as variations of the transformer's coupling coefficient; this reduces the operating frequency range and minimizes the size of the transcutaneous transformer.

·         Higher efficiency; minimizing the circulating current through the magnetizing inductance, together with zero-voltage switching (ZVS) of the primary switches and zero-current switching (ZCS) of the secondary rectifier diodes, improves the efficiency significantly, especially on the secondary side (inside the body).


Electronics Meet Animal Brains


Until recently, neurobiologists have used computers for simulation, data collection, and data analysis, but not to interact directly with nerve tissue in live, behaving animals. Although digital computers and nerve tissue both use voltage waveforms to transmit and process information, engineers and neurobiologists have yet to cohesively link the electronic signaling of digital computers with the electronic signaling of nerve tissue in freely behaving animals.
Recent advances in microelectromechanical systems (MEMS), CMOS electronics, and embedded computer systems will finally let us link computer circuitry to neural cells in live animals and, in particular, to reidentifiable cells with specific, known neural functions. The key components of such a brain-computer system include neural probes, analog electronics, and a miniature microcomputer. Researchers developing neural probes such as sub-micron MEMS probes, microclamps, microprobe arrays, and similar structures can now penetrate and make electrical contact with nerve cells without causing significant or long-term damage to probes or cells.
Researchers developing analog electronics such as low-power amplifiers and analog-to-digital converters can now integrate these devices with microcontrollers on a single low-power CMOS die. Further, researchers developing embedded computer systems can now incorporate all the core circuitry of a modern computer on a single silicon chip that can run on minuscule power from a tiny watch battery. In short, engineers have all the pieces they need to build truly autonomous implantable computer systems.
Until now, high signal-to-noise recording as well as digital processing of real-time neuronal signals have been possible only in constrained laboratory experiments. By combining MEMS probes with analog electronics and modern CMOS computing into self-contained, implantable microsystems, implantable computers will free neuroscientists from the lab bench.

Embedded DRAM


Even though the term DRAM has been common for many decades, development in the field of DRAM was very slow. The storage medium reached its present semiconductor form only after long scientific research. Once the semiconductor storage medium was well accepted, plans were put forward to integrate the logic circuits associated with the DRAM along with the DRAM itself. However, technological complexities and economic justification for such a complex integrated circuit are difficult hurdles to overcome. Although scientific breakthroughs are numerous in the commodity DRAM industry, similar techniques are not always appropriate when high-performance logic circuits are included on the same substrate. Hence, eDRAM pioneers have begun to develop numerous integration schemes. Two basic integration philosophies for an eDRAM technology are:

  • Incorporating memory circuits in a technology optimized for low-cost, high-performance logic.
  • Incorporating logic circuits in a technology optimized for high-density, low-performance DRAM.
This seemingly subtle semantic difference significantly impacts mask count, system performance, peripheral circuit complexity, and total memory capacity of eDRAM products. Furthermore, corporations with aggressive commodity DRAM technology do not have expertise in the design of complicated digital functions and are not able to assemble a design team to complete the task of a truly merged DRAM-logic product. Conversely, small application-specific integrated circuit (ASIC) design corporations, unfamiliar with DRAM-specific elements and design practice, cannot carry out an efficient merged logic design and therefore mar the beauty of the original intent to integrate. Clearly, the reuse of process technology is an enabling factor en route to cost-effective eDRAM technology. By the same account, modern circuit designers should be familiar with the new elements of eDRAM technology so that they can efficiently reuse DRAM-specific structures and elements in other digital functions. The reuse of additional electrical elements is a methodology that will make eDRAM more than just a memory interconnected to a few million Boolean gates.
In the following sections of this report the DRAM applications and architectures that are expected to form the basis of eDRAM products are reviewed. Then a description of elements found in generic eDRAM technologies is presented so that non-memory-designers can become familiar with eDRAM specific elements and technology. Various technologies used in eDRAM are discussed. An example of eDRAM is also discussed towards the end of the report.
It can be clearly seen from this report that an embedded DRAM macro extends the on-chip capacity to more than 40 MB, allowing historically off-chip memory to be integrated on chip and enabling System-on-a-Chip (SoC) designs. With this memory integrated on chip, the bandwidth is increased to a great extent. A highly integrated DRAM approach also simplifies board design, thereby reducing overall system cost and time to market. Even more importantly, embedding DRAM enables higher bandwidth by allowing a wider on-chip bus and saves power by eliminating DRAM I/O.

Electrical and chemical diagnostics of transformer insulation


The main function of a power system is to supply electrical energy to its customers with an acceptable degree of reliability and quality. Among many other things, the reliability of a power system depends on trouble-free transformer operation. Now, in electricity utilities around the world, a significant number of power transformers are operating beyond their design life. Most of these transformers are operating without evidence of distress. The same situation is evident in Australia. In Powerlink Queensland (PLQ), 25% of the power transformers were more than 25 years old in 1991. So priority attention should be directed to research into improved diagnostic techniques for determining the condition of the insulation in aged transformers.
The insulation system in a power transformer consists of cellulosic materials (paper, pressboard and transformerboard) and processed mineral oil. The cellulosic materials and oil insulation used in transformers degrade with time. The degradation depends on the thermal, oxidative, hydrolytic, electrical and mechanical conditions which the transformer experienced during its lifetime.
The condition of the paper and pressboard insulation has been monitored by (a) bulk measurements (dissolved gas analysis (DGA), insulation resistance (IR), tan δ and furans) and (b) measurements on samples removed from the transformer (degree of polymerization (DP), tensile strength). At the interface between the paper and oil in the transformer, interfacial polarization may occur, resulting in an increase in the loss tangent and dielectric loss. A DC method was developed for measuring the interfacial polarization spectrum for the determination of insulation condition in aged transformers.
This paper makes contributions to the determination of the insulation condition of transformers by bulk measurements and measurements on samples removed from the transformer. It is based on a University of Queensland research project conducted with cooperation from PLQ and GEC-Alsthom.
Most of the currently used techniques have some drawbacks. Dissolved gas analysis requires a data bank based on experimental results from failed transformers for predicting the fault type. When transformer oil is replaced or refurbished, the analysis of furans in the refurbished oil may not show any trace of degradation, although the cellulose may have degraded significantly. DP estimation is based on a single-point viscosity measurement. Molecular weight studies by single-point viscosity measurements are of limited value when dealing with a complex polymer blend, such as Kraft paper, particularly in cases where the molecular weight distribution of the paper changes significantly as the degradation proceeds. In these instances, a new technique, gel permeation chromatography (GPC), is likely to be more useful than the viscosity method, because it provides information about the change in molecular weight and molecular weight distribution. Investigation of the GPC technique has been included in this research to assess its effectiveness in determining the condition of insulation.
Conventional electrical properties (dissipation factor and breakdown strength) of cellulosic materials are not significantly affected by ageing, so very little recent research has been directed to electrical diagnostic techniques. In this research project, thorough investigations were also undertaken of the conventional electrical properties, along with the interfacial polarization parameters of the cellulosic insulation materials. The interfacial phenomena are strongly influenced by insulation degradation products, such as polar functionalities, water, etc. The condition of the dielectric and its degradation due to ageing can be monitored by studying the rate and process of polarization, which can be studied using a DC field. Furthermore, this is a non-destructive diagnostic test.
A retired power transformer (25 MVA, 11/132 kV) and several distribution transformers were used for the experimental work. The results from these transformers will be presented and an attempt will be made to correlate the electrical and chemical test results. The variation of the results across different locations in a power transformer will be discussed with reference to their thermal stress distribution. Accelerated ageing experiments were conducted to predict the long-term insulation behaviour, and the results are presented in the accompanying paper.

DISTRIBUTED WIRELESS COMMUNICATION SYSTEM


With the rapid progress in telecommunications, more and more services are provided on the basis of broadband communications, such as video services and high-speed Internet. With the worldwide construction of a backbone network based on optical fiber providing almost unlimited communication capability, the limited throughput of the subscriber loop becomes one of the most stringent bottlenecks. Compared to the capacity of the backbone network, which is measured in tens of gigabits per second, the throughput of the subscriber loop is much lower, only up to hundreds of megabits per second for wired systems (including fixed wireless access). For mobile access the throughput is even lower, and depends on the mobility of the terminal. For example, the peak data rate is only 2 Mb/s for 3G systems.
Since there will be more and more need for mobile services, the poor throughput of mobile access not only limits user applications based on interconnection, but also wastes the capability of the backbone network. This case is quite similar to the traffic conditions shown in Fig. a, which is an image of an ultra-wide expressway with a few narrow entrances.
Since the little paths are rough, narrow, and crowded, the problems in Fig. a are:
§  Terminals are far away from the expressway, which will consume much power.
§  Too many cars converge into the same narrow paths.
§  Little paths converge several times before going into the expressway.
§  The expressway is used insufficiently, since few cars are running on it.
In telecommunications, the optical fiber network (the expressway) is relatively much cheaper than the wireless spectrum (the little paths), while the capability of the former is much greater than that of the latter. As shown in Fig. b, besides the backbone expressway, there are some dedicated sub-expressways used to provide direct entrances for distributed subscribers. The above example implies that the high-capacity wired network, being so cheap, can help us solve the problem of wireless access (too many users crowded into a very narrow bandwidth). The key issue is to provide each mobile user a direct or one-hop connection to an optical network. This structure also follows the trend in network evolution: the hierarchical or tree-like structure of traditional networks will gradually be flattened into a simple single-layer one.

Digital watermarking


In recent years, the distribution of works of art, including pictures, music, video and textual documents, has become easier. With the widespread and increasing use of the Internet, digital forms of these media (still images, audio, video, text) are easily accessible. This is clearly advantageous, in that it is easier to market and sell one's works of art. However, this same property threatens copyright protection. Digital documents are easy to copy and distribute, allowing for pirating. There are a number of methods for protecting ownership. One of these is known as digital watermarking.
Digital watermarking is the process of inserting a digital signal or pattern (indicative of the owner of the content) into digital content. The signal, known as a watermark, can be used later to identify the owner of the work, to authenticate the content, and to trace illegal copies of the work.
Watermarks of varying degrees of obtrusiveness are added to presentation media as a guarantee of authenticity, quality, ownership, and source.
To be effective in its purpose, a watermark should adhere to a few requirements. In particular, it should be robust, and transparent. Robustness requires that it be able to survive any alterations or distortions that the watermarked content may undergo, including intentional attacks to remove the watermark, and common signal processing alterations used to make the data more efficient to store and transmit. This is so that afterwards, the owner can still be identified. Transparency requires a watermark to be imperceptible so that it does not affect the quality of the content, and makes detection, and therefore removal, by pirates less possible.
The medium of focus in this paper is the still image. There are a variety of image watermarking techniques, falling into two main categories depending on the domain in which the watermark is constructed: the spatial domain (producing spatial watermarks) and the frequency domain (producing spectral watermarks). The effectiveness of a watermark is improved when the technique exploits known properties of the human visual system. These are known as perceptually based watermarking techniques. Within this category, the class of image-adaptive watermarks proves most effective.
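As a concrete, deliberately simple illustration of a spatial-domain technique, the Python sketch below hides a short bit string in the least significant bits of a grayscale image. LSB embedding is transparent but not robust, so it only shows the embed/extract idea rather than any of the perceptually based schemes discussed here; the image array and bit string are invented for the example.

import numpy as np

def embed_lsb_watermark(image, watermark_bits):
    # Embed a binary watermark in the least significant bits of an
    # 8-bit grayscale image (the simplest spatial-domain technique).
    flat = image.flatten().copy()
    if len(watermark_bits) > flat.size:
        raise ValueError("watermark is larger than the cover image")
    flat[:len(watermark_bits)] &= 0xFE   # clear the LSB of the first N pixels
    flat[:len(watermark_bits)] |= np.asarray(watermark_bits, dtype=np.uint8)
    return flat.reshape(image.shape)

def extract_lsb_watermark(image, n_bits):
    # Recover the first n_bits embedded by embed_lsb_watermark.
    return image.flatten()[:n_bits] & 1

# Example: hide an 8-bit owner ID in a random 4x4 "image".
cover = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
mark = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_lsb_watermark(cover, mark)
assert list(extract_lsb_watermark(stego, len(mark))) == mark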
In conclusion, image watermarking techniques that take advantage of properties of the human visual system, and the characteristics of the image create the most robust and transparent watermarks.

Delay tolerant network


Consider a scientist who is responsible for the operation of a robotic meteorological station located on the planet Mars (Fig. 1). The weather station is one of several dozen instrument platforms that communicate among themselves via a wireless local area network deployed on the Martian surface. The scientist wants to upgrade the software in the weather station's data management computer by installing and dynamically loading a large new module. The module must be transmitted first from the scientist's workstation to a deep space antenna complex, then from the antenna complex to a constellation of relay satellites in low Mars orbit (no one of which is visible from Earth long enough on any single orbit to receive the entire module), and finally from the relay satellites to the weather station.
The first leg of this journey would typically be completed using the TCP/IP protocol suite over the Internet, where electronic communication is generally characterized by:

·         Relatively small signal propagation latencies (on the order of milliseconds)
·         Relatively high data rates (up to 40 Gb/s for OC-768 service)
·         Bidirectional communication on each connection
·         Continuous end-to-end connectivity
·         On-demand network access with high potential for congestion

However, for the second leg a different protocol stack would be necessary. Electronic communication between a tracking station and a robotic spacecraft in deep space is generally characterized by the following (a short latency calculation follows this list):

·         Very large signal propagation latencies (on the order of minutes; Fig. 2)
·         Relatively low data rates (typically 8-256 kb/s)
·         Possibly time-disjoint periods of reception and transmission, due to orbital mechanics and/or spacecraft operational policy
·         Intermittent scheduled connectivity
·         Centrally managed access to the communication channel with essentially no potential for congestion
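To make the latency contrast concrete, here is a quick one-way light-time calculation in Python; the Earth-Mars distances are round figures (closest approach versus near conjunction) used only for illustration.

# One-way signal propagation delay between Earth and Mars (illustrative).
C_KM_PER_S = 299_792.458          # speed of light in vacuum, km/s

def one_way_delay_minutes(distance_km):
    return distance_km / C_KM_PER_S / 60.0

# Earth-Mars distance varies roughly between ~55 million km (closest
# approach) and ~400 million km (near conjunction).
for label, d in [("closest approach", 55e6), ("near conjunction", 400e6)]:
    print(f"{label}: ~{one_way_delay_minutes(d):.0f} minutes one way")

The answer ranges from roughly 3 to 22 minutes, which is why chatty, acknowledgement-driven protocols such as TCP are a poor fit for this leg.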

Digital audio broadcasting


Digital audio broadcasting, DAB, is the most fundamental advancement in radio technology since the introduction of FM stereo radio. It gives listeners interference-free reception of CD-quality sound, easy-to-use radios, and the potential for wider listening choice through many additional stations and services.
DAB is a reliable multi-service digital broadcasting system for reception by mobile, portable and fixed receivers with a simple, non-directional antenna. It can be operated at any frequency from 30 MHz to 3 GHz for mobile reception (higher for fixed reception) and may be used on terrestrial, satellite, hybrid (satellite with complementary terrestrial) and cable broadcast networks.
The DAB system is a rugged, highly spectrum- and power-efficient sound and data broadcasting system. It uses advanced digital audio compression techniques (MPEG 1 Audio Layer II and MPEG 2 Audio Layer II) to achieve a spectrum efficiency equivalent to or higher than that of conventional FM radio.
The efficiency of spectrum use is further increased by a special feature called the Single Frequency Network (SFN): a broadcast network can be extended virtually without limit by operating all transmitters on the same radio frequency.

DNA computer


Computer chip manufacturers are furiously racing to make the next microprocessor that will topple speed records. Sooner or later, though, this competition is bound to hit a wall. Microprocessors made of silicon will eventually reach their limits of speed and miniaturization. Chip makers need a new material to produce faster computing speeds.
 Millions of natural supercomputers exist inside living organisms, including your body. DNA (deoxyribonucleic acid) molecules, the material our genes are made of, have the potential to perform calculations many times faster than the world's most powerful human-built computers. DNA might one day be integrated into a computer chip to create a so-called biochip that will push computers even faster. DNA molecules have already been harnessed to perform complex mathematical problems.
While still in their infancy, DNA computers will be capable of storing billions of times more data than your personal computer. DNA can be used to calculate complex mathematical problems. However, this early DNA computer is far from challenging silicon-based computers in terms of speed. The Rochester team's DNA logic gates are the first step toward creating a computer that has a structure similar to that of an electronic PC. Instead of using electrical signals to perform logical operations, these DNA logic gates rely on DNA code. They detect fragments of genetic material as input, splice these fragments together, and form a single output.

SMART PIXEL ARRAYS


Smart pixels, the integration of photodetector arrays and processing electronics on a single semiconductor chip, have been driven by their capability to perform parallel processing of large pixelated images and to reduce, in real time, a complex image into a manageable stream of signals that can be brought off-chip. In recent years, optical modulators and emitters have been integrated with photodetectors and on-chip electronics. The potential uses for smart pixels are almost as varied as the designs. They can be used for image processing, data processing, communications, and that special sub-niche of communications, computer networking. While no immediate commercial use for smart pixels has risen to the forefront, smart pixel systems are utilizing technology developed for a wide variety of other commercial applications. As lasers, video displays, optoelectronics and other related technologies continue to progress, it is inevitable that smart pixels will continue to advance along with these commercially successful technologies.
The name smart pixel is a combination of two ideas: "pixel" is an image-processing term denoting a small part, or quantized fragment, of an image, while "smart" is borrowed from standard electronics and reflects the presence of logic circuits. Together they describe a myriad of devices. These smart pixels can be almost entirely optical in nature, perhaps using the non-linear optical properties of a material to manipulate optical data, or they can be mainly electronic, for instance a photoreceiver coupled with some electronic switching.

Crusoe


Mobile computing has been the buzzword for quite a long time. Mobile computing devices like laptops, web slates and notebook PCs are becoming common nowadays. The heart of every PC, whether a desktop or mobile PC, is the microprocessor. Several microprocessors are available in the market for desktop PCs from companies like Intel, AMD, Cyrix, etc. The mobile computing market has never had a microprocessor specifically designed for it. The microprocessors used in mobile PCs are optimized versions of the desktop PC microprocessor. Mobile computing makes very different demands on processors than desktop computing, yet up until now, mobile x86 platforms have simply made do with the same old processors originally designed for desktops. Those processors consume lots of power, and they get very hot. When you're on the go, a power-hungry processor means you have to pay a price: run out of power before you've finished, run more slowly and lose application performance, or run through the airport with pounds of extra batteries. A hot processor also needs fans to cool it, making the resulting mobile computer bigger, clunkier and noisier. A newly designed microprocessor with low power consumption will still be rejected by the market if its performance is poor, so any attempt in this regard must strike a proper performance-power balance to ensure commercial success. A newly designed microprocessor must also be fully x86 compatible, that is, it should run x86 applications just like conventional x86 microprocessors, since most of the presently available software has been designed to work on the x86 platform.
Crusoe is the new microprocessor which has been designed specially for the mobile computing market. It has been designed after considering the above mentioned constraints. This microprocessor was developed by a small Silicon Valley startup company called Transmeta Corp. after five years of secret toil at an expenditure of $100 million. The concept of Crusoe is well understood from the simple sketch of the processor architecture, called the 'amoeba'. In this concept, the x86 architecture is an ill-defined amoeba containing features like segmentation, ASCII arithmetic, variable-length instructions, etc. The amoeba explained how a traditional microprocessor was, in their design, to be divided up into hardware and software.
Thus Crusoe was conceptualized as a hybrid microprocessor; that is, it has a software part and a hardware part, with the software layer surrounding the hardware unit. The role of the software is to act as an emulator, translating x86 binaries into native code at run time. Crusoe is a 128-bit microprocessor fabricated using the CMOS process. The chip's design is based on a technique called VLIW to ensure design simplicity and high performance. Besides this, it also uses Transmeta's two patented technologies, namely Code Morphing Software and LongRun Power Management. It is a highly integrated processor available in different versions for different market segments.
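The core idea of that software layer, dynamic binary translation with a translation cache, can be shown with a toy Python sketch. This only illustrates the principle; the function names and the "guest block" format are invented here and are not Transmeta's actual Code Morphing Software.

# Toy sketch of dynamic binary translation with a translation cache.
translation_cache = {}   # guest block address -> translated "native" ops

def translate_block(guest_block):
    # Pretend to translate a list of guest (x86-like) instructions into
    # host (VLIW-like) operations; a real translator would also schedule
    # and bundle them.
    return [f"native({insn})" for insn in guest_block]

def execute(address, guest_block):
    # Translate each block only the first time it is seen; afterwards the
    # cached translation is reused, which is where the speed is recovered.
    if address not in translation_cache:
        translation_cache[address] = translate_block(guest_block)
    for op in translation_cache[address]:
        pass  # a real system would dispatch the host operation here

execute(0x1000, ["mov eax, 1", "add eax, 2"])   # translated, then run
execute(0x1000, ["mov eax, 1", "add eax, 2"])   # reused from the cache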

CompactPCI


Compact peripheral component interconnect (CPCI) is an adaptation of the peripheral component interconnect (PCI) specification for industrial computer applications requiring a smaller, more robust mechanical form factor than the one defined for the desktop. CompactPCI is an open standard supported by the PCI Industrial Computer Manufacturers Group (PICMG). CompactPCI is best suited for small, high-speed industrial computing applications where transfers occur between a number of high-speed cards.
It is a high-performance industrial bus that uses the Eurocard form factor and is fully compatible with the Enterprise Computer Telephony Forum (ECTF) computer telephony (CT) Bus™ H.110 standard specification. CompactPCI products make it possible for original equipment manufacturers (OEMs), integrators, and resellers to build powerful and cost-effective solutions for telco networks while using fewer development resources. CompactPCI products let developers scale their applications to the size, performance, maintenance, and reliability demands of telco environments by supporting the CT Bus, hot swap, administrative tools such as the simple network management protocol (SNMP), and extensive system diagnostics. The move toward open, standards-based systems has revolutionized the computer telephony (CT) industry. There are a number of reasons for these changes. Open systems have benefited from improvements in personal computer (PC) hardware and software, as well as from advances in digital signal processing (DSP) technology. As a result, flexible, high-performance systems are scalable to thousands of ports while remaining cost effective for use in telco networks. In addition, fault-tolerant chassis, distributed software architecture, and N+1 redundancy have succeeded in meeting the demanding reliability requirements of network operators.
One of the remaining hurdles facing open CT systems is serviceability. CT systems used in public networks must be extremely reliable and easy to repair without system downtime. In addition, network operation requires first-rate administrative and diagnostic capabilities to keep services up and running.

Circuit and Safety Analysis System (CSAS)


Success is all about being in the right place at the right time... and the axiom is a guiding principle for designers of motorsport circuits. To avoid problems you need to know where and when things are likely to go wrong before cars turn a wheel, and anticipating accidents is a science.
Take barriers, for example. There is little point erecting them in the wrong place, but predicting the right place is a black art. The FIA has developed bespoke software, the Circuit and Safety Analysis System (CSAS), to predict problem areas on F1 circuits.
Where and when cars leave circuits is due to the complex interaction between their design, the driver's reaction and the specific configuration of the track, and the CSAS allows the input of many variables (lap speeds, engine power curves, car weight changes, aerodynamic characteristics, etc.) to predict how cars may leave the circuit at particular places. The variables are complex. The impact point of a car continuing in a straight line at a corner is easy to predict, but if the driver has any remaining control and alters the car's trajectory, or if a mechanical fault introduces fresh variables, its final destination is tricky to model.
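The "easy" straight-line case can at least be sketched: given a run-off point, a heading, and a straight barrier segment, the impact point is a simple line-segment intersection. The Python sketch below is a generic geometry illustration; the coordinates and function names are invented and are not part of the actual CSAS software.

import numpy as np

def straight_line_impact(car_pos, car_heading_deg, barrier_p1, barrier_p2):
    # Intersect the car's straight-line path with a barrier segment.
    # Returns the impact point, or None if the car misses this barrier.
    p = np.asarray(car_pos, float)
    d = np.array([np.cos(np.radians(car_heading_deg)),
                  np.sin(np.radians(car_heading_deg))])
    a, b = np.asarray(barrier_p1, float), np.asarray(barrier_p2, float)
    e = b - a
    denom = d[0] * (-e[1]) - d[1] * (-e[0])
    if abs(denom) < 1e-9:
        return None                      # path parallel to the barrier
    rhs = a - p
    t = (rhs[0] * (-e[1]) - rhs[1] * (-e[0])) / denom   # distance along the path
    s = (d[0] * rhs[1] - d[1] * rhs[0]) / denom          # position along the barrier
    if t < 0 or not (0 <= s <= 1):
        return None                      # barrier is behind the car, or missed
    return p + t * d

# A car leaving the braking zone 10 degrees off the straight, with a barrier
# running along x = 120 m:
print(straight_line_impact((0, 0), 10, (120, -20), (120, 60)))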
Modern tyre barriers are built of road tyres with plastic tubes sandwiched between them. The side facing the track is covered with conveyor belting to prevent wheels becoming snagged and distorting the barrier. The whole provides a deformable 'cushion', a principle that has found its way onto civilian roads. Barriers made of air-filled cells, currently under investigation, may be the final answer. Another important safety factor is the road surface. Racing circuits are at the cutting edge of surface technology, experimenting with new materials for optimum performance.

Blu-ray Disc


Tokyo, Japan, February 19, 2002: Nine leading companies today announced that they have jointly established the basic specifications for a next-generation large-capacity optical disc video recording format called "Blu-ray Disc". The Blu-ray Disc enables the recording, rewriting and playback of up to 27 gigabytes (GB) of data on a single-sided, single-layer 12 cm CD/DVD-size disc using a 405 nm blue-violet laser.
By employing a short-wavelength blue-violet laser, the Blu-ray Disc successfully minimizes its beam spot size by raising the numerical aperture (NA) of the lens that converges the laser to 0.85. In addition, by using a disc structure with a 0.1 mm optical transmittance protection layer, the Blu-ray Disc diminishes aberration caused by disc tilt. This also allows for better disc readout and an increased recording density. The Blu-ray Disc's track pitch is reduced to 0.32 µm, almost half that of a regular DVD, achieving up to 27 GB of high-density recording on a single-sided disc.
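The capacity gain follows directly from the diffraction-limited spot size, which scales roughly as wavelength divided by NA. The quick Python comparison below uses commonly quoted DVD figures (650 nm wavelength, NA 0.60) purely for illustration; they are not part of the announcement above.

# Relative spot size and areal density: spot diameter ~ wavelength / NA
# (the common proportionality constant cancels in the ratio).
def spot_diameter(wavelength_nm, na):
    return wavelength_nm / na

dvd = spot_diameter(650, 0.60)       # commonly quoted DVD optics
bluray = spot_diameter(405, 0.85)    # figures from the announcement above
print(f"spot diameter ratio (Blu-ray / DVD): {bluray / dvd:.2f}")
print(f"approximate areal density gain:      {(dvd / bluray) ** 2:.1f}x")

The result, a spot roughly 0.44 times the diameter and about five times the areal density, is consistent with 27 GB per layer versus a DVD's 4.7 GB.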
Because the Blu-ray Disc utilizes the globally standard "MPEG-2 Transport Stream" compression technology, which is highly compatible with digital broadcasting, a wide range of content can be recorded. It is possible for the Blu-ray Disc to record digital high-definition broadcasts while maintaining high quality, and to record other data simultaneously with the video data if they are received together. In addition, the adoption of a unique ID written on each Blu-ray Disc enables high-quality copyright protection functions.
The Blu-ray Disc is a technology platform that can store sound and video while maintaining high quality and also access the stored content in an easy-to-use way. This will be important in the coming broadband era as content distribution becomes increasingly diversified. The nine companies involved in the announcement will respectively develop products that take full advantage of Blu-ray Disc's large capacity and high-speed data transfer rate. They are also aiming to further enhance the appeal of the new format through developing a larger capacity, such as over 30GB on a single sided single layer disc and over 50GB on a single sided double layer disc. Adoption of the Blu-ray Disc in a variety of applications including PC data storage and high definition video software is being considered.

Optical Computing Technology


Optical computing was a hot research area in the 1980s, but the work tapered off due to materials limitations that prevented optochips from getting small enough and cheap enough to be more than laboratory curiosities. Now optical computers are back, with advances in self-assembled conducting organic polymers that promise super-tiny all-optical chips.

Optical computing technology is, in general, developing in two directions. One approach is to build computers that have the same architecture as present-day computers but use optics, that is, electro-optical hybrids. Another approach is to create a completely new kind of computer, which can perform all functional operations in the optical domain. In recent years, a number of devices that can ultimately lead us to real optical computers have already been manufactured. These include optical logic gates, optical switches, optical interconnections and optical memory.

Current trends in optical computing emphasize communications, for example the use of free-space optical interconnects as a potential solution to remove the bottlenecks experienced in electronic architectures. Optical technology is one of the most promising, and may eventually lead to new computing applications as a consequence of faster processing speed, as well as better connectivity and higher bandwidth.

BLAST

The explosive growth of both the wireless industry and the Internet is creating a huge market opportunity for wireless data access. Limited Internet access, at very low speeds, is already available as an enhancement to some existing cellular systems. However, those systems were designed with the purpose of providing voice services and at most short messaging, not fast data transfer. Traditional wireless technologies are not very well suited to meet the demanding requirements of providing very high data rates with the ubiquity, mobility and portability characteristics of cellular systems. Increased use of antenna arrays appears to be the only means of enabling the data rates and capacities needed for wireless Internet and multimedia services. While the deployment of base station arrays is becoming universal, it is really the simultaneous deployment of base station and terminal arrays that can unleash unprecedented levels of performance by opening up multiple spatial signaling dimensions. Theoretically, user data rates as high as 2 Mb/s will be supported in certain environments, although recent studies have shown that approaching those rates might only be feasible under extremely favorable conditions: in the vicinity of the base station and with no other users competing for bandwidth. Some fundamental barriers related to the nature of the radio channel, as well as to the limited bandwidth available at the frequencies of interest, stand in the way of the high data rates and low cost associated with wide access.
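The value of those "multiple spatial signaling dimensions" can be quantified with the standard MIMO capacity expression C = log2 det(I + (SNR/Nt) H H^H), whose average grows roughly linearly with min(Nt, Nr) in rich scattering. The Python sketch below is a generic Monte Carlo illustration of that information-theoretic result, not a model of the BLAST detection algorithm itself; the SNR and antenna counts are arbitrary.

import numpy as np

def avg_mimo_capacity(n_t, n_r, snr_db, trials=2000, seed=0):
    # Average capacity of an i.i.d. Rayleigh-fading n_r x n_t MIMO channel,
    # C = log2 det(I + (SNR / n_t) H H^H), in bit/s/Hz.
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    caps = []
    for _ in range(trials):
        h = (rng.standard_normal((n_r, n_t)) +
             1j * rng.standard_normal((n_r, n_t))) / np.sqrt(2)
        m = np.eye(n_r) + (snr / n_t) * h @ h.conj().T
        caps.append(np.log2(np.linalg.det(m).real))
    return float(np.mean(caps))

for n in (1, 2, 4, 8):
    print(f"{n}x{n} antennas: ~{avg_mimo_capacity(n, n, 20):.1f} bit/s/Hz at 20 dB SNR")

The near-linear growth with the number of antenna pairs is exactly what layered space-time (BLAST) architectures try to exploit in practice.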

Cheapest Car Tracking System


The project includes the development of the software and hardware for a microcontroller-based tracking system based on the technique of 'dromography'. Here 'dromos' means [way, street, route, corridor] and 'graphos' means [to write], so dromography is the study of the geography and logistics of transportation, movement and communication routes. The digital dashboard displays the speed, distance traveled, engine temperature, mileage and fuel level of the vehicle. All the parameters are displayed on a 16x1 LCD.
The project comes under embedded systems, which is the combination of software, hardware and perhaps some sensors used to perform a specific tracing function. The project is done using Microchip's microcontroller PIC16F877. The microcontroller programming is done in assembly language in Microchip's integrated development environment, MPLAB version 5.60.
The hardware part keeps monitoring the speed and the direction of the vehicle. The hardware sends the data to a computer via the serial port, and the track is plotted with the help of interface software. The hardware section can be divided into the microcontroller, input unit, output unit, and display unit. The software section comprises the microcontroller code and the user-interface window programming. The window is developed using the Visual Basic language, and the route is traced on this window. Some additional push buttons, such as SAVE, LOAD and CLEAR, are also provided to perform useful tasks.
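Although the report does not spell out the plotting algorithm, drawing a route from speed and direction samples is classic dead reckoning, illustrated by the Python sketch below (the real project does this in PIC firmware and a Visual Basic window; the sampling interval and sample values here are invented).

import math

def plot_track(samples, dt=1.0):
    # samples: list of (speed_m_per_s, heading_deg) pairs taken every dt
    # seconds. Returns the list of (x, y) points that form the route.
    x, y = 0.0, 0.0
    track = [(x, y)]
    for speed, heading_deg in samples:
        heading = math.radians(heading_deg)
        x += speed * dt * math.cos(heading)
        y += speed * dt * math.sin(heading)
        track.append((x, y))
    return track

# Drive north for two samples, then turn east.
print(plot_track([(10, 90), (10, 90), (10, 0), (10, 0)]))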
This project is very helpful for making a map to plot and retrace the route of a vehicle, and it also takes care of displaying speed, distance, engine temperature and mileage. This is a very useful method for town planning and road surveys. These maps can also be used to trace our position on the map. Being an independent gadget, it works accurately with high precision.
In this technological era many advanced tracking systems are available in the market, mostly based on the Global Positioning System. But if anyone wants to enjoy these facilities, they have to pay a lot for the receiver and other service charges. There are also many other limitations, such as the geographical position of the place, satellite coverage, etc., which affect the service.
Hence the need to develop an efficient tracking system at the least cost. The system should work effectively anywhere, with user-friendly interaction and the facility to store and load previously collected data.

Biometric Fingerprint Identification


Positive identification of individuals is a very basic societal requirement. Reliable user authentication is becoming an increasingly important task in the web-enabled world. The consequences of an insecure authentication system in a corporate or enterprise environment can be catastrophic, and may include loss of confidential information, denial of service, and compromised data integrity. The value of reliable user authentication is not limited to just computer or network access. Many other applications in everyday life also require user authentication, such as banking and e-commerce, and these could benefit from enhanced security.
In fact, as more interactions take place electronically, it becomes even more important to have an electronic verification of a person's identity. Until recently, electronic verification took one of two forms. It was based on something the person had in their possession, like a magnetic swipe card, or something they knew, like a password. The problem is that these forms of electronic identification are not very secure, because they can be given away, taken away, or lost, and motivated people have found ways to forge or circumvent these credentials.
The ultimate form of electronic verification of a person's identity is biometrics. Biometrics refers to the automatic identification of a person based on his or her physiological or behavioral characteristics, such as a finger scan, retina, iris, voice scan, or signature scan. Using this technique, physiological characteristics of a person can be turned into electronic processes that are inexpensive and easy to use. People have always used the brain's innate ability to recognize a familiar face, and it has long been known that a person's fingerprints can be used for identification.
IDENTIFICATION AND VERIFICATION SYSTEMS
A person's identity can be resolved in two ways: identification and verification. The former involves identifying a person from all biometric measurements collected in a database; this involves a one-to-many match, also referred to as a 'cold search'. "Do I know who you are?" is the inherent question this process seeks to answer. Verification involves authenticating a person's claimed identity from his or her previously enrolled pattern, and this involves a one-to-one match. The question it seeks to answer is, "Are you who you claim to be?"
VERIFICATION
Verification involves comparing a person's fingerprint to one that was previously recorded in the system database. The person claiming an identity provides a fingerprint, typically by placing a finger on a capacitance scanner or an optical scanner. The computer locates the previous fingerprint record using the claimed identity. This process is relatively easy because the computer needs to compare only two fingerprint records. The verification process is referred to as a 'closed search' because the search field is limited. The second question is "Who is this person?" This is the identification function, which is used to prevent duplicate applications or enrollments. In this case a newly supplied fingerprint is compared to all others in the database. A match indicates that the person has already enrolled or applied.
IDENTIFICATION
The identification process, also known as an 'open search', is much more technically demanding. It involves many more comparisons and may require differentiating among several database fingerprints that are similar to the subject's.
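The 1:1 versus 1:N distinction can be made concrete with a toy Python sketch. The similarity function and threshold below are placeholders invented for illustration; a real matcher would compare minutiae features rather than sets of labels.

THRESHOLD = 0.7   # arbitrary acceptance threshold for this toy example

def similarity(template_a, template_b):
    # Placeholder: fraction of shared features between two templates.
    common = len(set(template_a) & set(template_b))
    return common / max(len(template_a), len(template_b), 1)

def verify(claimed_id, live_template, database):
    # Verification: a 1:1 "closed search" against the claimed identity only.
    enrolled = database.get(claimed_id)
    return enrolled is not None and similarity(live_template, enrolled) >= THRESHOLD

def identify(live_template, database):
    # Identification: a 1:N "open search" over every enrolled record.
    best_id, best_score = None, 0.0
    for person_id, enrolled in database.items():
        score = similarity(live_template, enrolled)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id if best_score >= THRESHOLD else None

db = {"alice": {"m1", "m2", "m3", "m4"}, "bob": {"m5", "m6", "m7", "m8"}}
probe = {"m1", "m2", "m3", "m9"}
print(verify("alice", probe, db))   # True: matches Alice's enrolled record
print(identify(probe, db))          # "alice": found among all records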

Animatronics


The first use of Audio-Animatronics was for Walt Disney's Enchanted Tiki Room in Disneyland, which opened in June, 1963. The Tiki birds were operated using digital controls; that is, something that is either on or off. Tones were recorded onto tape, which on playback would cause a metal reed to vibrate. The vibrating reed would close a circuit and thus operate a relay. The relay sent a pulse of energy (electricity) to the figure's mechanism which would cause a pneumatic valve to operate, which resulted in the action, like the opening of a bird's beak. Each action (e.g., opening of the mouth) had a neutral position, otherwise known as the "natural resting position" (e.g., in the case of the Tiki bird it would be for the mouth to be closed). When there was no pulse of energy forthcoming, the action would be in, or return to, the natural resting position.
This digital/tone-reed system used pneumatic valves exclusively--that is, everything was operated by air pressure. Audio-Animatronics' movements that were operated with this system had two limitations. First, the movement had to be simple--on or off. (e.g., The open and shut beak of a Tiki bird or the blink of an eye, as compared to the many different positions of raising and lowering an arm.) Second, the movements couldn't require much force or power. (e.g., The energy needed to open a Tiki Bird's beak could easily be obtained by using air pressure, but in the case of lifting an arm, the pneumatic system didn't provide enough power to accomplish the lift.) Walt and WED knew that this pneumatic system could not sufficiently handle the more complicated shows of the World's Fair. A new system was devised.
In addition to the digital programming of the Tiki show, the Fair shows required analog programming. This new "analog system" involved the use of voltage regulation. The tone would be on constantly throughout the show, and the voltage would be varied to create the movement of the figure. This "varied voltage" signal was sent to what was referred to as the "black box." The black boxes had the electronic equipment that would receive the signal and then activate the pneumatic and hydraulic valves that moved the performing figures. The use of hydraulics allowed for a substantial increase in power, which was needed for the more unwieldy and demanding movements. (Hydraulics were used exclusively with the analog system, and pneumatics were used only with the tone-reed/digital system.)
There were two basic ways of programming a figure. The first used two different methods of controlling the voltage regulation. One was a joystick-like device called a transducer, and the other device was a potentiometer (an instrument for measuring an unknown voltage or potential difference by comparison to a standard voltage--like the volume control knob on a radio or television receiver). If this method was used, when a figure was ready to be programmed, each individual action--one at a time-- would be refined, rehearsed, and then recorded. For instance, the programmer, through the use of the potentiometer or transducer, would repeatedly rehearse the gesture of lifting the arm, until it was ready for a "take." This would not include finger movement or any other movements, it was simply the lifting of an arm. The take would then be recorded by laying down audible sound impulses (tones) onto a piece of 35 mm magnetic film stock. The action could then instantly be played back to see if it would work, or if it had to be redone. (The machines used for recording and playback were the 35 mm magnetic units used primarily in the dubbing process for motion pictures. Many additional units that were capable of just playback were also required for this process. Because of their limited function these playback units were called "dummies.")
Eventually, there would be a number of actions for each figure, resulting in an equal number of reels of 35 mm magnetic film (e.g., ten actions would equal ten reels). All individual actions were then rerecorded onto a single reel--up to ten actions, each activated by a different tone, could be combined onto a single reel. For each action/reel, one dummy was required to play it back. Thus for ten actions, ten playback machines and one recording machine were required to combine the moves onto a new reel of 35 mm magnetic film.
"Sync marks" (synchronization points) were placed at the front end of each individual action reel and all of the dummies were interlocked. This way, during the rerecording, all of the actions would start at the proper time. As soon as it was finished, the new reel could be played back and the combined actions could be studied. Wathel, and often times Marc Davis (who did a lot of the programming and animation design for the Carousel show) would watch the figure go through the motions of the newly recorded multiple actions. If it was decided that the actions didn't work together, or something needed to be changed, the process was started over; either by rerecording the individual action, or by combining the multiple actions again. If the latter needed to be done, say the "arm lift action" came in too early, it would be accomplished by unlocking the dummy that had the "arm-lift reel" on it. The film would then be hand cranked, forward or back, a certain number of frames, which changed the start time of the arm lift in relation to the other actions. The dummies would be interlocked, and the actions, complete with new timing on the arm lift, would be recorded once again.
With this dummy system, the dialogue and music could also be interlocked and synched-up with the actions. Then the audio could be listened to as the figure went through the actions. This was extremely helpful in getting the gestures and actions to match the dialogue.
The other method used for programming a figure was the control harness. It was hooked up so that it would control the voltage regulation relative to the movements of the harness. Wathel tells horror stories of sitting in the harness for hours upon end, trying to keep every movement in his body to a minimum, except for the several movements they wanted for the figure. This method had the advantage of being able to do several actions at once, but obviously due to the complexities, a great deal of rehearsal was required.
There was also a harness for the mouth movements. Ken O'Brien, who was responsible for programming most of the mouth movements, used a transducer at first for the mouth programming. "Later they designed a harness for his head that controlled the movement of the jaw," remembered Gordon Williams, recording engineer on the AA figures for the Fair. "It was easier for him to coordinate the movement, because he could watch the movement at the same time that he was doing it."