Blogger Themes

Thursday 19 July 2012

Engineers develop an ‘intelligent co-pilot’ for cars

Engineerblogger
July 19, 2012








Barrels and cones dot an open field in Saline, Mich., forming an obstacle course for a modified vehicle. A driver remotely steers the vehicle through the course from a nearby location as a researcher looks on. Occasionally, the researcher instructs the driver to keep the wheel straight — a trajectory that appears to put the vehicle on a collision course with a barrel. Despite the driver’s actions, the vehicle steers itself around the obstacle, transitioning control back to the driver once the danger has passed.

The key to the maneuver is a new semiautonomous safety system developed by Sterling Anderson, a PhD student in MIT’s Department of Mechanical Engineering, and Karl Iagnemma, a principal research scientist in MIT’s Robotic Mobility Group.

The system uses an onboard camera and laser rangefinder to identify hazards in a vehicle’s environment. The team devised an algorithm to analyze the data and identify safe zones — avoiding, for example, barrels in a field, or other cars on a roadway. The system allows a driver to control the vehicle, only taking the wheel when the driver is about to exit a safe zone.

Anderson, who has been testing the system in Michigan since last September, describes it as an “intelligent co-pilot” that monitors a driver’s performance and makes behind-the-scenes adjustments to keep the vehicle from colliding with obstacles and to hold it within a safe region of the environment, such as a lane or open area.

“The real innovation is enabling the car to share [control] with you,” Anderson says. “If you want to drive, it’ll just … make sure you don’t hit anything.”

The group presented details of the safety system recently at the Intelligent Vehicles Symposium in Spain.

Off the beaten path

Robotics research has focused in recent years on developing systems — from cars to medical equipment to industrial machinery — that can be controlled by either robots or humans. For the most part, such systems operate along preprogrammed paths.

As an example, Anderson points to the technology behind self-parking cars. To parallel park, a driver engages the technology by flipping a switch and taking his hands off the wheel. The car then parks itself, following a preplanned path based on the distance between neighboring cars.

While a planned path may work well in a parking situation, Anderson says that when it comes to driving, a single path, or even several, is far too limiting.

“The problem is, humans don’t think that way,” Anderson says. “When you and I drive, [we don’t] choose just one path and obsessively follow it. Typically you and I see a lane or a parking lot, and we say, ‘Here is the field of safe travel, here’s the entire region of the roadway I can use, and I’m not going to worry about remaining on a specific line, as long as I’m safely on the roadway and I avoid collisions.’”

Anderson and Iagnemma integrated this human perspective into their robotic system. The team came up with an approach to identify safe zones, or “homotopies,” rather than specific paths of travel. Instead of mapping out individual paths along a roadway, the researchers divided a vehicle’s environment into triangles, with certain triangle edges representing an obstacle or a lane’s boundary.

The researchers devised an algorithm that “constrains” obstacle-abutting edges, allowing a driver to navigate across any triangle edge except those that are constrained. If a driver is in danger of crossing a constrained edge — for instance, if he’s fallen asleep at the wheel and is about to run into a barrier or obstacle — the system takes over, steering the car back into the safe zone.
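
The decision logic lends itself to a short sketch. The code below is an illustrative model only, not the researchers’ controller: it skips the triangulation and threat-assessment details and simply checks, with a standard segment-intersection test, whether a short straight-line prediction of the car’s motion crosses any edge flagged as constrained. The function names and example geometry are invented for the sketch.

    # Illustrative sketch of the constrained-edge check (not the MIT implementation).
    # A "constrained" edge abuts an obstacle or lane boundary; crossing any other
    # triangle edge is allowed.

    def ccw(a, b, c):
        """Return True if points a, b, c are in counter-clockwise order."""
        return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

    def segments_cross(p1, p2, q1, q2):
        """Standard test for whether segments p1-p2 and q1-q2 intersect."""
        return ccw(p1, q1, q2) != ccw(p2, q1, q2) and ccw(p1, p2, q1) != ccw(p1, p2, q2)

    def driver_keeps_control(position, predicted_position, constrained_edges):
        """The co-pilot stays passive unless the predicted path leaves the safe zone."""
        for edge_start, edge_end in constrained_edges:
            if segments_cross(position, predicted_position, edge_start, edge_end):
                return False  # about to cross a constrained edge: the system should take over
        return True

    # Example: one constrained edge (say, the side of a barrel) directly ahead of the car.
    constrained = [((5.0, -1.0), (5.0, 1.0))]
    print(driver_keeps_control((0.0, 0.0), (6.0, 0.0), constrained))  # False -> intervene
    print(driver_keeps_control((0.0, 0.0), (4.0, 0.0), constrained))  # True  -> hands off

In a real controller the prediction would come from a vehicle model rather than a straight line, but the principle is the same: control is shared, and the system acts only at the boundary of the safe zone.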

Building trust

So far, the team has run more than 1,200 trials of the system, with few collisions; most of these occurred when glitches in the vehicle’s camera kept it from identifying an obstacle. For the most part, the system has successfully helped drivers avoid collisions.

Benjamin Saltsman, manager of intelligent truck vehicle technology and innovation at Eaton Corp., says the system has several advantages over fully autonomous variants such as the self-driving cars developed by Google and Ford. Such systems, he says, are loaded with expensive sensors, and require vast amounts of computation to plan out safe routes.

"The implications of [Anderson's] system is it makes it lighter in terms of sensors and computational requirements than what a fully autonomous vehicle would require," says Saltsman, who was not involved in the research. "This simplification makes it a lot less costly, and closer in terms of potential implementation."

In experiments, Anderson has also observed an interesting human response: Those who trust the system tend to perform better than those who don’t. For instance, when asked to hold the wheel straight, even in the face of a possible collision, drivers who trusted the system drove through the course more quickly and confidently than those who were wary of the system.

And what would the system feel like for someone who is unaware that it’s activated? “You would likely just think you’re a talented driver,” Anderson says. “You’d say, ‘Hey, I pulled this off,’ and you wouldn’t know that the car is changing things behind the scenes to make sure the vehicle remains safe, even if your inputs are not.”

He acknowledges that this isn’t necessarily a good thing, particularly for people just learning to drive; beginners may end up thinking they are better drivers than they actually are. Without negative feedback, these drivers can actually become less skilled and more dependent on assistance over time. On the other hand, Anderson says expert drivers may feel hemmed in by the safety system. He and Iagnemma are now exploring ways to tailor the system to various levels of driving experience.

The team is also hoping to pare down the system to identify obstacles using a single cellphone. “You could stick your cellphone on the dashboard, and it would use the camera, accelerometers and gyro to provide the feedback needed by the system,” Anderson says. “I think we’ll find better ways of doing it that will be simpler, cheaper and allow more users access to the technology.”

This research was supported by the United States Army Research Office and the Defense Advanced Research Projects Agency. The experimental platform was developed in collaboration with Quantum Signal LLC with assistance from James Walker, Steven Peters and Sisir Karumanchi.

Source: MIT News

Autonomous robot scans ship hulls for mines

MIT News
July 17, 2012


Algorithms developed by MIT researchers enable an autonomous underwater vehicle (AUV) to swim around and reconstruct a ship's propeller. 
Image: Franz Hover, Brendan Englot

For years, the U.S. Navy has employed human divers, equipped with sonar cameras, to search for underwater mines attached to ship hulls. The Navy has also trained dolphins and sea lions to search for bombs on and around vessels. While animals can cover a large area in a short amount of time, they are costly to train and care for, and don’t always perform as expected.

In the last few years, Navy scientists, along with research institutions around the world, have been engineering resilient robots for minesweeping and other risky underwater missions. The ultimate goal is to design completely autonomous robots that can navigate and map cloudy underwater environments — without any prior knowledge of those environments — and detect mines as small as an iPod.

Now Franz Hover, the Finmeccanica Career Development Associate Professor in the Department of Mechanical Engineering, and graduate student Brendan Englot have designed algorithms that vastly improve such robots’ navigation and feature-detecting capabilities. Using the group’s algorithms, the robot is able to swim around a ship’s hull and view complex structures such as propellers and shafts. The goal is to achieve a resolution fine enough to detect a 10-centimeter mine attached to the side of a ship.

“A mine this small may not sink the vessel or cause loss of life, but if it bends the shaft, or damages the bearing, you still have a big problem,” Hover says. “The ability to ensure that the bottom of the boat doesn’t have a mine attached to it is really critical to vessel security today.”

Hover and his colleagues have detailed their approach in a paper to appear in the International Journal of Robotics Research.

Why platinum is the wrong material for fuel cells

Engineerblogger
July 19, 2012



Professor Alfred Anderson

Fuel cells are inefficient because the catalyst most commonly used to convert chemical energy to electricity is made of the wrong material, a researcher at Case Western Reserve University argues. Rather than continue the futile effort to tweak that material—platinum—to make it work better, Chemistry Professor Alfred Anderson urges his colleagues to start anew.

“Using platinum is like putting a resistor in the system,” he said. Anderson freely acknowledges he doesn’t know what the right material is, but he’s confident researchers’ energy would be better spent seeking it out than persisting with platinum.

“If we can find a catalyst that will do this [more efficiently],” he said, “it would reach closer to the limiting potential and get more energy out of the fuel cell.”

Anderson’s analysis and a guide for a better catalyst have been published in a recent issue of Physical Chemistry Chemical Physics and in Electrocatalysis online.

Even in the best of circumstances, Anderson explained, the chemical reaction that produces energy in a fuel cell—like those being tested by some car companies—ends up wasting a quarter of the energy that could be transformed into electricity. This point is well recognized in the scientific community, but, to date, efforts to address the problem have proved fruitless.

Anderson blames the failure on a fundamental misconception as to the reason for the energy waste. The most widely accepted theory says impurities are binding to the platinum surface of the cathode and blocking the desired reaction.

“The decades-old surface-poisoning explanation is lame because there is more to the story,” Anderson said.

To understand the loss of energy, Anderson used data derived from oxygen-reduction experiments to calculate the optimal bonding strengths between platinum and intermediate molecules formed during the oxygen-reduction reaction. The reaction takes place at the platinum-coated cathode.

He found the intermediate molecules bond too tightly or too loosely to the cathode surface, slowing the reaction and causing a drop in voltage. The result is that the fuel cell produces about 0.93 volts instead of the potential maximum of 1.23 volts.
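
Those two voltages are consistent with the “quarter of the energy” figure quoted earlier, since at a given current the energy delivered per unit of charge is simply the cell voltage:

    \[
    \frac{V_{\text{cell}}}{V_{\text{ideal}}} \approx \frac{0.93\ \mathrm{V}}{1.23\ \mathrm{V}} \approx 0.76,
    \qquad 1 - 0.76 \approx 0.24,
    \]

so roughly a quarter of the ideally available energy is lost at the cathode.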

To eliminate the loss, calculations show, the catalyst should have bonding strengths tailored so that all reactions taking place during oxygen reduction occur at or as near to 1.23 volts as possible.

Anderson said the use of volcano plots, which are a statistical tool for comparing catalysts, has actually misguided the search for the best one. “They allow you to grade a series of similar catalysts, but they don’t point to better catalysts.”

He said a catalyst made of copper laccase, a material found in trees and fungi, has the desired bonding strength but lacks stability. Finding a catalyst that has both is the challenge.

Anderson is working with other researchers exploring alternative catalysts as well as an alternative reaction pathway in an effort to increase efficiency.


Source: Case Western Reserve University

Researchers Create Highly Conductive and Elastic Conductors Using Silver Nanowires

Engineerblogger
July 19, 2012


The silver nanowires can be printed to fabricate patterned stretchable conductors.


Researchers from North Carolina State University have developed highly conductive and elastic conductors made from silver nanoscale wires (nanowires). These elastic conductors can be used to develop stretchable electronic devices.

Stretchable circuitry would be able to do many things that its rigid counterpart cannot. For example, an electronic “skin” could help robots pick up delicate objects without breaking them, and stretchable displays and antennas could make cell phones and other electronic devices stretch and compress without affecting their performance. However, the first step toward making such applications possible is to produce conductors that are elastic and able to effectively and reliably transmit electric signals regardless of whether they are deformed.

Dr. Yong Zhu, an assistant professor of mechanical and aerospace engineering at NC State, and Feng Xu, a Ph.D. student in Zhu’s lab, have developed such elastic conductors using silver nanowires.

Silver has very high electric conductivity, meaning that it can transfer electricity efficiently. And the new technique developed at NC State embeds highly conductive silver nanowires in a polymer that can withstand significant stretching without adversely affecting the material’s conductivity. This makes it attractive as a component for use in stretchable electronic devices.

“This development is very exciting because it could be immediately applied to a broad range of applications,” Zhu said. “In addition, our work focuses on high and stable conductivity under a large degree of deformation, complementary to most other work using silver nanowires that are more concerned with flexibility and transparency.”

“The fabrication approach is very simple,” says Xu. Silver nanowires are placed on a silicon plate. A liquid polymer is poured over the silicon substrate. The polymer is then exposed to high heat, which turns the polymer from a liquid into an elastic solid. Because the polymer flows around the silver nanowires when it is in liquid form, the nanowires are trapped in the polymer when it becomes solid. The polymer can then be peeled off the silicon plate.

“Also silver nanowires can be printed to fabricate patterned stretchable conductors,” Xu says. The fact that it is easy to make patterns using the silver nanowire conductors should facilitate the technique’s use in electronics manufacturing.

When the nanowire-embedded polymer is stretched and relaxed, the surface of the polymer containing nanowires buckles. The end result is that the composite is flat on the side that contains no nanowires, but wavy on the side that contains silver nanowires.

After the nanowire-embedded surface has buckled, the material can be stretched up to 50 percent of its elongation, or tensile strain, without affecting the conductivity of the silver nanowires. This is because the buckled shape of the material allows the nanowires to stay in a fixed position relative to each other, even as the polymer is being stretched.

“In addition to having high conductivity and a large stable strain range, the new stretchable conductors show excellent robustness under repeated mechanical loading,” Zhu says. Other reported stretchable conductive materials are typically deposited on top of substrates and could delaminate under repeated mechanical stretching or surface rubbing.

The paper, “Highly Conductive and Stretchable Silver Nanowire Conductors,” was published in Advanced Materials. The research was supported by the National Science Foundation.


Source:  North Carolina State University


Researchers develop “nanorobot” that can be programmed to target different diseases



Engineerblogger

July 19, 2012


University of Florida researchers have moved a step closer to treating diseases on a cellular level by creating a tiny particle that can be programmed to shut down the genetic production line that cranks out disease-related proteins.

In laboratory tests, these newly created “nanorobots” all but eradicated hepatitis C virus infection. The programmable nature of the particle makes it potentially useful against diseases such as cancer and other viral infections.

The research effort, led by Y. Charles Cao, a UF associate professor of chemistry, and Dr. Chen Liu, a professor of pathology and endowed chair in gastrointestinal and liver research in the UF College of Medicine, is described online this week in the Proceedings of the National Academy of Sciences.

“This is a novel technology that may have broad application because it can target essentially any gene we want,” Liu said. “This opens the door to new fields so we can test many other things. We’re excited about it.”

During the past five decades, nanoparticles — particles so small that tens of thousands of them can fit on the head of a pin — have emerged as a viable foundation for new ways to diagnose, monitor and treat disease. Nanoparticle-based technologies are already in use in medical settings, such as in genetic testing and for pinpointing genetic markers of disease. And several related therapies are at varying stages of clinical trial.

The Holy Grail of nanotherapy is an agent so exquisitely selective that it enters only diseased cells, targets only the specified disease process within those cells and leaves healthy cells unharmed.

To demonstrate how this can work, Cao and colleagues, with funding from the National Institutes of Health, the Office of Naval Research and the UF Research Opportunity Seed Fund, created and tested a particle that targets hepatitis C virus in the liver and prevents the virus from making copies of itself.

Hepatitis C infection causes liver inflammation, which can eventually lead to scarring and cirrhosis. The disease is transmitted via contact with infected blood, most commonly through injection drug use, needlestick injuries in medical settings, and birth to an infected mother. More than 3 million people in the United States are infected and about 17,000 new cases are diagnosed each year, according to the Centers for Disease Control and Prevention. Patients can go many years without symptoms, which can include nausea, fatigue and abdominal discomfort.

Current hepatitis C treatments involve the use of drugs that attack the replication machinery of the virus. But the therapies are only partially effective, on average helping less than 50 percent of patients, according to studies published in The New England Journal of Medicine and other journals. Side effects vary widely from one medication to another, and can include flu-like symptoms, anemia and anxiety.

Cao and colleagues, including graduate student Soon Hye Yang and postdoctoral associates Zhongliang Wang, Hongyan Liu and Tie Wang, wanted to improve on the concept of interfering with the viral genetic material in a way that boosted therapy effectiveness and reduced side effects.

The particle they created can be tailored to match the genetic material of the desired target of attack, and to sneak into cells unnoticed by the body’s innate defense mechanisms.

Recognition of genetic material from potentially harmful sources is the basis of important treatments for a number of diseases, including cancer, that are linked to the production of detrimental proteins. It also has potential for use in detecting and destroying viruses used as bioweapons.

The new virus-destroyer, called a nanozyme, has a backbone of tiny gold particles and a surface with two main biological components. The first biological portion is a type of protein called an enzyme that can destroy the genetic recipe-carrier, called mRNA, for making the disease-related protein in question. The other component is a large molecule called a DNA oligonucleotide that recognizes the genetic material of the target to be destroyed and instructs its neighbor, the enzyme, to carry out the deed. By itself, the enzyme does not selectively attack hepatitis C, but the combo does the trick.

“They completely change their properties,” Cao said.

In laboratory tests, the treatment led to almost a 100 percent decrease in hepatitis C virus levels. In addition, it did not trigger the body’s defense mechanism, and that reduced the chance of side effects. Still, additional testing is needed to determine the safety of the approach.

Future therapies could potentially be in pill form.

“We can effectively stop hepatitis C infection if this technology can be further developed for clinical use,” said Liu, who is a member of The UF Shands Cancer Center.

The UF nanoparticle design takes inspiration from the Nobel prize-winning discovery of a process in the body in which one part of a two-component complex destroys the genetic instructions for manufacturing protein, and the other part serves to hold off the body’s immune system attacks. This complex controls many naturally occurring processes in the body, so drugs that imitate it have the potential to hijack the production of proteins needed for normal function. The UF-developed therapy tricks the body into accepting it as part of the normal processes, but does not interfere with those processes.

“They’ve developed a nanoparticle that mimics a complex biological machine — that’s quite a powerful thing,” said nanoparticle expert Dr. C. Shad Thaxton, an assistant professor of urology at the Feinberg School of Medicine at Northwestern University and co-founder of the biotechnology company AuraSense LLC, who was not involved in the UF study. “The promise of nanotechnology is extraordinary. It will have a real and significant impact on how we practice medicine.”

Source: University of Florida

The Artificial Finger: a sensitive biomimetic fingertip for prosthetics and robotics

Engineerblogger
June 19, 2012



To touch something is to understand it – emotionally and cognitively. Touch is one of our most important senses, one we use and need in our daily lives. But accidents or illnesses can deprive us of it.

Now European researchers on the NanoBioTact and NanoBioTouch projects are delving deep into the mysteries of touch, and they have developed the first sensitive artificial finger.

The main scientific aims of the projects are to radically improve understanding of the human mechano-transduction system and of tissue-engineered nanobiosensors. To that end, an international, multidisciplinary team of 13 scientific institutes, universities and companies has pooled its knowledge. “There are many potential applications of biomimetic tactile sensing, for example in prosthetic limbs where you’ve got neuro-coupling, which allows the limb to sense objects and also to feed back to the brain to control the limb. Another area would be robotics, where you might want the capability to sense the grip of objects, or intelligent haptic exploration of surfaces, for example,” says Prof. Michael Adams, the coordinator of NanoBioTact.

The scientists have already developed a prototype of the first sensitive artificial finger. It works with an array of pressure sensors that mimic the spatial resolution, sensitivity and dynamics of human neural tactile sensors and can be directly connected to the central nervous system. Combined with an artificial skin that mimics a human fingerprint, the device’s sensitivity to vibrations is improved. Depending on the texture of a surface, the biomimetic finger vibrates in different ways as it slides across it, producing different signals; once patients use the device, they could recognise whether a surface is smooth or rough. “The sensors are working very much like the sensors on your own finger,” says physicist Dr. Michael Ward from the School of Mechanical Engineering at the University of Birmingham.

Putting the biomimetic finger on artificial limbs would take prostheses to the next level. “Compared to the hand prostheses which are currently on the market, an integrated sense of touch would be a major improvement. It would be a truly modern and biomimetic device which would give the patient the feeling that it belonged to his own body,” says Dr. Lucia Beccai from the Centre for Micro-Robotics at the Italian Institute of Technology. But a lot of testing remains before the artificial finger is available at large scale. Nevertheless, by combining computer and cognitive sciences with nano- and biotechnology, the NanoBioTact and NanoBioTouch projects have already brought us a big step closer to artificial limbs with sensitive fingers.

Source: Youris

Tuesday 10 July 2012

Keeping electric vehicle batteries cool

Engineerblogger
July 10, 2012





Heat can damage the batteries of electric vehicles – even just driving fast on the freeway in summer temperatures can overheat the battery. An innovative new coolant conducts heat away from the battery three times more effectively than water, keeping the battery temperature within an acceptable range even in extreme driving situations.


Batteries provide the “fuel” that drives electric cars – in effect, the vehicles’ lifeblood. If batteries are to have a long service life, overheating must be avoided. A battery’s “comfort zone” lies between 20°C and 35°C. But even a Sunday drive in the midday heat of summer can push a battery’s temperature well beyond that range. The damage caused can be serious: operating a battery at a temperature of 45°C instead of 35°C halves its service life. And batteries are expensive – a new one can cost as much as half the price of the entire vehicle. That is why it is so important to keep them cool. Thus far, conventional cooling systems have not reached their full potential: either the batteries are not cooled at all – which is the case with ones that are simply exchanged for a fully charged battery at the “service station” – or they are air cooled. But air can absorb only very little heat and is also a poor conductor of it. What’s more, air cooling requires big spaces between the battery’s cells to allow sufficient fresh air to circulate between them. Water-cooling systems are still in their infancy. Though their thermal capacity exceeds that of air-cooling systems and they are better at conducting away heat, their downside is the limited supply of water in the system compared with the essentially limitless amount of air that can flow through a battery.

More space under the hood

In future, another option will be available for keeping batteries cool – a coolant by the name of CryoSolplus. It is a dispersion that mixes water and paraffin along with stabilizing tensides (surfactants) and a dash of the anti-freeze agent glycol. The advantage is that CryoSolplus can absorb three times as much heat as water, and functions better as a buffer in extreme situations such as trips on the freeway at the height of summer. This means that the holding tank for the coolant can be much smaller than those of water-cooling systems – saving both weight and space under the hood. In addition, CryoSolplus is good at conducting away heat, moving it very quickly from the battery cells into the coolant. With additional costs of just 50 to 100 euros, the new cooling system is only marginally more expensive than water cooling. The coolant was developed by researchers at the Fraunhofer Institute for Environmental, Safety and Energy Technology UMSICHT in Oberhausen.

As CryoSolplus absorbs heat, the solid paraffin droplets within it melt, storing the heat in the process. When the solution cools, the droplets revert to their solid form. Scientists call such substances phase change materials or PCMs. “The main problem we had to overcome during development was to make the dispersion stable,” explains Dipl.-Ing. Tobias Kappels, a scientist at UMSICHT. The individual solid droplets of paraffin had to be prevented from agglomerating or – as they are lighter than water – collecting on the surface of the dispersion. They need to be evenly distributed throughout the water. Tensides serve to stabilize the dispersion, depositing themselves on the paraffin droplets and forming a type of protective coating. “To find out which tensides are best suited to this purpose, we examined the dispersion in three different stress situations: How long can it be stored without deteriorating? How well does it withstand mechanical stresses such as being pumped through pipes? And how stable is it when exposed to thermal stresses, for instance when the paraffin particles freeze and then thaw again?” says Kappels. Other properties of the dispersion that the researchers are optimizing include its heat capacity, its ability to transfer heat and its flow capability. The scientists’ next task will be to carry out field tests, trying out the coolant in an experimental vehicle.
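
A rough, illustrative estimate shows why a phase change dispersion can outperform plain water. The 50 percent paraffin fraction, the latent and specific heats and the 10 °C temperature swing below are generic textbook-style values assumed for the sketch, not figures from Fraunhofer UMSICHT:

    \[
    Q_{\text{water}} \approx 4.2\ \tfrac{\mathrm{kJ}}{\mathrm{kg\,K}} \times 10\ \mathrm{K} = 42\ \mathrm{kJ/kg},
    \]
    \[
    Q_{\text{dispersion}} \approx 0.5 \times 42 + 0.5 \times (2.1 \times 10 + 200) \approx 132\ \mathrm{kJ/kg},
    \]

about three times the figure for water over the same swing, because the melting paraffin stores heat latently as well as sensibly.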




Source: Fraunhofer-Gesellschaft

Graphene Repairs Holes By Knitting Itself Back Together, Say Physicists

Engineerblogger
July 10, 2012




The graphene revolution is upon us. If the visionaries are to be believed, the next generation of more or less everything is going to be based on this wonder material--sensors, actuators, transistors and information processors and so on. There seems little that graphene can't do.

But there's one fly in the ointment. Nobody has yet worked out how to make graphene in large, reliable quantities or how to carve and grow it into the shapes necessary for the next generation of devices.

That's largely because it's tricky growing anything into a layer only a single atom thick. But for carbon, it's all the more difficult because of this element's affinity to other atoms, including itself. A carbon sheet will happily curl up and form a tube or a ball or some more exotic shape. It will also react with other atoms nearby, which prevents growth and can even tear graphene apart.

So a better understanding of the way a graphene sheet interacts with itself and its environment is crucial if physicists are ever going to tame this stuff.

Enter Konstantin Novoselov at the University of Manchester and a few pals who have spent more than a few hours staring at graphene sheets through an electron microscope to see how the material behaves.

Today, these guys say they've discovered why graphene appears so unpredictable. It turns out that if you make a hole in graphene, the material automatically knits itself back together again.

Novoselov and co made their discovery by etching tiny holes into a graphene sheet using an electron beam and watching what happens next using an electron microscope. They also added a few atoms of palladium or nickel, which catalyse the dissociation of carbon bonds and bind to the edges of the holes making them stable.

They found that the size of the holes depended on the number of metal atoms they added--more metal atoms can stabilise bigger holes.

But here's the curious thing. If they also added extra carbon atoms to the mix, these displaced the the metal atoms and reknitted the holes back together again.

Novoselov and co say the structure of the repaired area depends on the form in which the carbon is available. So when available as a hydrocarbon, the repairs tend to contain non-hexagonal defects where foreign atoms have entered the structure.

But when the carbon is available in pure form, the repairs are perfect and form pristine graphene.

That's important because it immediately suggests a way to grow graphene into almost any shape using the careful injection of metal and carbon atoms.

But there are significant challenges ahead. One important question is how quickly these processes occur and whether they can be controlled with the precision and reliability necessary for device manufacture.

Novoselov is a world leader in this area and the joint recipient of the Nobel Prize for physics in 2010 for his early work on graphene. He and his team are well set up to solve this and various related questions.

But with the future of computing (and almost everything else) at stake, there's bound to be plenty of competitors snapping at their heels.

Source: Technology Review



Smart Headlight System Will Have Drivers Seeing Through the Rain: Shining Light Between Drops Makes Thunderstorm Seem Like a Drizzle

Engineerblogger
July 10, 2012




Drivers can struggle to see when driving at night in a rainstorm or snowstorm, but a smart headlight system invented by researchers at Carnegie Mellon University's Robotics Institute can improve visibility by constantly redirecting light to shine between particles of precipitation.

The system, demonstrated in laboratory tests, prevents the distracting and sometimes dangerous glare that occurs when headlight beams are reflected by precipitation back toward the driver.

"If you're driving in a thunderstorm, the smart headlights will make it seem like it's a drizzle," said Srinivasa Narasimhan, associate professor of robotics.

The system uses a camera to track the motion of raindrops and snowflakes and then applies a computer algorithm to predict where those particles will be just a few milliseconds later. The light projection system then adjusts to deactivate light beams that would otherwise illuminate the particles in their predicted positions.
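
Conceptually, the per-frame loop is simple. The sketch below is a hypothetical illustration of that detect-predict-mask cycle, not the CMU implementation; the drop detector, the camera-to-projector mapping, the frame size and the numerical values are all placeholders.

    import numpy as np

    LATENCY_S = 0.013        # end-to-end response time reported for the lab prototype (13 ms)
    MARGIN_PX = 2            # safety margin around each predicted drop, in projector pixels

    def predict_positions(drops, velocities, dt=LATENCY_S):
        """Advance each detected drop by its measured velocity over the system latency."""
        return drops + velocities * dt

    def build_mask(predicted, shape=(480, 640)):
        """Start with every projector pixel on, then switch off pixels that would hit a drop."""
        mask = np.ones(shape, dtype=bool)
        for x, y in predicted.astype(int):
            x0, x1 = max(x - MARGIN_PX, 0), min(x + MARGIN_PX + 1, shape[1])
            y0, y1 = max(y - MARGIN_PX, 0), min(y + MARGIN_PX + 1, shape[0])
            mask[y0:y1, x0:x1] = False
        return mask

    # Toy frame: three drops (pixel coordinates) falling straight down at ~3,800 px/s,
    # i.e. roughly 50 pixels of travel over one 13 ms latency period.
    drops = np.array([[100.0, 40.0], [320.0, 200.0], [500.0, 410.0]])
    velocities = np.array([[0.0, 3800.0]] * 3)
    mask = build_mask(predict_positions(drops, velocities))
    print(mask.mean())       # fraction of the beam left on; stays very close to 1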

"A human eye will not be able to see that flicker of the headlights," Narasimhan said. "And because the precipitation particles aren't being illuminated, the driver won't see the rain or snow either."

To people, rain can appear as elongated streaks that seem to fill the air. To high-speed cameras, however, rain consists of sparsely spaced, discrete drops. That leaves plenty of space between the drops where light can be effectively distributed if the system can respond rapidly, Narasimhan said.

In their lab tests, Narasimhan and his research team demonstrated that their system could detect raindrops, predict their movement and adjust a light projector accordingly in 13 milliseconds. At low speeds, such a system could eliminate 70 to 80 percent of visible rain during a heavy storm, while losing only 5 or 6 percent of the light from the headlamp.

To operate at highway speeds and to work effectively in snow and hail, the system's response will need to be reduced to just a few milliseconds, Narasimhan said. The lab tests have demonstrated the feasibility of the system, however, and the researchers are confident that the speed of the system can be boosted.
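
A back-of-the-envelope estimate (not from the CMU work) illustrates why latency is the bottleneck: a large raindrop near terminal velocity falls at roughly 9 m/s, and a car at about 100 km/h moves at roughly 28 m/s, so over a 13-millisecond response time

    \[
    9\ \mathrm{m/s} \times 0.013\ \mathrm{s} \approx 0.12\ \mathrm{m},
    \qquad 28\ \mathrm{m/s} \times 0.013\ \mathrm{s} \approx 0.36\ \mathrm{m},
    \]

the drop falls about 12 cm and the car closes about 36 cm on it before the mask is updated, so prediction errors grow quickly unless the response time shrinks to a few milliseconds.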

The test apparatus, for instance, couples a camera with an off-the-shelf DLP projector. Road-worthy systems likely would be based on arrays of light-emitting diode (LED) light sources in which individual elements could be turned on or off, depending on the location of raindrops. New LED technology could make it possible to combine LED light sources with image sensors on a single chip, enabling high-speed operation at low cost.

Narasimhan's team is now engineering a more compact version of the smart headlight that in coming years could be installed in a car for road testing.

Though a smart headlight system will never be able to eliminate all precipitation from the driver's field of view, simply reducing the amount of reflection and distortion caused by precipitation can substantially improve visibility and reduce driver distraction. Another benefit is that the system also can detect oncoming cars and direct the headlight beams away from the eyes of those drivers, eliminating the need to shift from high to low beams.

"One good thing is that the system will not fail in a catastrophic way," Narasimhan said. "If it fails, it is just a normal headlight."

This research was sponsored by the Office of Naval Research, the National Science Foundation, the Samsung Advanced Institute of Technology and Intel Corp. Collaborators include Takeo Kanade, professor of computer science and robotics; Anthony Rowe, assistant research professor of electrical and computer engineering; Robert Tamburo, Robotics Institute project scientist; Peter Barnum, a former robotics Ph.D. student now with Texas Instruments; and Raoul de Charette, a visiting Ph.D. student from Mines ParisTech, France.

Source: Carnegie Mellon University

Nanodevice builds electricity from tiny pieces

Engineerblogger
July 10, 2012

Scanning electron microscope image of the electron pump. The arrow shows the direction of electron pumping. The hole in the middle of the electrical control gates, where the electrons are trapped, is about 0.0001 mm across.

A team of scientists at the National Physical Laboratory (NPL) and University of Cambridge has made a significant advance in using nano-devices to create accurate electrical currents. Electrical current is composed of billions and billions of tiny particles called electrons. They have developed an electron pump - a nano-device - which picks these electrons up one at a time and moves them across a barrier, creating a very well-defined electrical current.

The device drives electrical current by manipulating individual electrons, one-by-one at very high speed. This technique could replace the traditional definition of electrical current, the ampere, which relies on measurements of mechanical forces on current-carrying wires.

The key breakthrough came when scientists experimented with the exact shape of the voltage pulses that control the trapping and ejection of electrons. By changing the voltage slowly while trapping electrons, and then much more rapidly when ejecting them, it was possible to massively speed up the overall rate of pumping without compromising the accuracy.

By employing this technique, the team were able to pump almost a billion electrons per second, 300 times faster than the previous record for an accurate electron pump set at the National Institute of Standards and Technology (NIST) in the USA in 1996.

Although the resulting current of 150 picoamperes is small (ten billion times smaller than the current used when boiling a kettle), the team were able to measure the current with an accuracy of one part-per-million, confirming that the electron pump was accurate at this level. This result is a milestone in the precise, fast, manipulation of single electrons and an important step towards a re-definition of the unit ampere.

As reported in Nature Communications, the team used a nano-scale semiconductor device called a 'quantum dot' to pump electrons through a circuit. The quantum dot is a tiny electrostatic trap less than 0.0001 mm wide. The shape of the quantum dot is controlled by voltages applied to nearby electrodes.

The dot can be filled with electrons and then raised in energy. By a process known as 'back-tunneling', all but one of the electrons fall out of the quantum dot back into the source lead. Ideally, just one electron remains trapped in the dot, which is ejected into the output lead by tilting the trap. When this is repeated rapidly this gives a current determined solely by the repetition rate and the charge on each electron - a universal constant of nature and the same for all electrons.
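
That relation makes the headline numbers easy to check: multiplying the elementary charge by a repetition rate just under a billion cycles per second gives

    \[
    I = e f \approx (1.602 \times 10^{-19}\ \mathrm{C}) \times (9.4 \times 10^{8}\ \mathrm{s^{-1}})
    \approx 1.5 \times 10^{-10}\ \mathrm{A} = 150\ \mathrm{pA},
    \]

which matches the 150-picoampere current reported above.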

The research makes significant steps towards redefining the ampere by developing the application of an electron pump which improves accuracy rates in primary electrical measurement.

Masaya Kataoka of the Quantum Detection Group at NPL explains:

"Our device is like a water pump in that it produces a flow by a cyclical action. The tricky part is making sure that exactly the same number of electronic charge is transported in each cycle.

The way that the electrons in our device behave is quite similar to water; if you try and scoop up a fixed volume of water, say in a cup or spoon, you have to move slowly otherwise you'll spill some. This is exactly what used to happen to our electrons if we went too fast."

Stephen Giblin also part of the Quantum Detection Group, added:

"For the last few years, we have worked on optimising the design of our device, but we made a huge leap forward when we fine-tuned the timing sequence. We've basically smashed the record for the largest accurate single-electron current by a factor of 300.

Although moving electrons one at a time is not new, we can do it much faster, and with very high reliability - a billion electrons per second, with an accuracy of less than one error in a million operations.

Using mechanical forces to define the ampere has made a lot of sense for the last 60 or so years, but now that we have the nanotechnology to control single electrons we can move on.

The technology might seem more complicated, but actually a quantum system of measurement is more elegant, because you are basing your system on fundamental constants of nature, rather than things which we know aren't really constant, like the mass of the standard kilogram."

Source:  National Physical Laboratory


How do you turn 10 minutes of power into 200? Efficiency, efficiency, efficiency.

Engineerblogger
July 10, 2012




DARPA seeks revolutionary advances in the efficiency of robotic actuation; fundamental research into biology, physics and electrical engineering could benefit all engineered, actuated systems

A robot that drives into an industrial disaster area and shuts off a valve leaking toxic steam might save lives. A robot that applies supervised autonomy to dexterously disarm a roadside bomb would keep humans out of harm’s way. A robot that carries hundreds of pounds of equipment over rocky or wooded terrain would increase the range warfighters can travel and the speed at which they move. But a robot that runs out of power after ten to twenty minutes of operation is limited in its utility. In fact, use of robots in defense missions is currently constrained in part by power supply issues. DARPA has created the M3 Actuation program, with the goal of achieving a 2,000 percent increase in the efficiency of power transmission and application in robots, to improve performance potential.
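
Read as plain arithmetic (an editorial gloss, not a DARPA specification), a 2,000 percent efficiency increase is roughly a factor of 20, which is what turns the headline’s 10 minutes of untethered operation into about 200:

    \[
    10\ \text{min} \times 20 = 200\ \text{min},
    \qquad 20\ \text{min} \times 20 = 400\ \text{min}.
    \]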

Humans and animals have evolved to consume energy very efficiently for movement. Bones, muscles and tendons work together for propulsion using as little energy as possible. If robotic actuation can be made to approach the efficiency of human and animal actuation, the range of practical robotic applications will greatly increase and robot design will be less limited by power plant considerations.

M3 Actuation is an effort within DARPA’s Maximum Mobility and Manipulation (M3) robotics program, and adds a new dimension to DARPA’s suite of robotics research and development work.

“By exploring multiple aspects of robot design, capabilities, control and production, we hope to converge on an adaptable core of robot technologies that can be applied across mission areas,” said Gill Pratt, DARPA program manager. “Success in the M3 Actuation effort would benefit not just robotics programs, but all engineered, actuated systems, including advanced prosthetic limbs.”

Proposals are sought in response to a Broad Agency Announcement (BAA). DARPA expects that solutions will require input from a broad array of scientific and engineering specialties to understand, develop and apply actuation mechanisms inspired in part by humans and animals. Technical areas of interest include, but are not limited to: low-loss power modulation, variable recruitment of parallel transducer elements, high-bandwidth variable impedance matching, adaptive inertial and gravitational load cancellation, and high-efficiency power transmission between joints.

Research and development will cover two tracks of work:
  • Track 1 asks performer teams to develop and demonstrate high-efficiency actuation technology that will allow robots similar to the DARPA Robotics Challenge (DRC) Government Furnished Equipment (GFE) platform to have twenty times longer endurance than the DRC GFE when running on untethered battery power (currently only 10-20 minutes). Using Government Furnished Information about the GFE, M3 Actuation performers will have to build a robot that incorporates the new actuation technology. These robots will be demonstrated at, but not compete in, the second DRC live competition scheduled for December 2014.
  • Track 2 will be tailored to performers who want to explore ways of improving the efficiency of actuators, but at scales both larger and smaller than applicable to the DRC GFE platform, and at technical readiness levels insufficient for incorporation into a platform during this program. Essentially, Track 2 seeks to advance the science and engineering behind actuation without the requirement to apply it at this point.

While separate efforts, M3 Actuation will run in parallel with the DRC. In both programs DARPA seeks to develop the enabling technologies required for expanded practical use of robots in defense missions. Thus, performers on M3 Actuation will share their design approaches at the first DRC live competition scheduled for December 2013, and demonstrate their final systems at the second DRC live competition scheduled for December 2014.

Source: DARPA

Hypersonic - The new stealth

Engineerblogger
July 10, 2012


Credit: DARPA


DARPA’s research and development in stealth technology during the 1970s and 1980s led to the world’s most advanced radar-evading aircraft, providing strategic national security advantage to the United States. Today, that strategic advantage is threatened as other nations’ abilities in stealth and counter-stealth improve. Restoring that battle space advantage requires advanced speed, reach and range. Hypersonic technologies have the potential to provide the dominance once afforded by stealth to support a range of varied future national security missions.

Extreme hypersonic flight at Mach 20 (i.e., 20 times the speed of sound)—which would enable DoD to get anywhere in the world in under an hour—is an area of research where significant scientific advancements have eluded researchers for decades. Thanks to programs by DARPA, the Army, and the Air Force in recent years, however, more information has been obtained about this challenging subject.
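
A quick sanity check of that claim, using round numbers rather than anything from DARPA: Mach 20 at high altitude corresponds to roughly 20 × 300 m/s ≈ 6 km/s, and no point on Earth is more than about half the planet’s 40,000 km circumference away, so

    \[
    t \approx \frac{20{,}000\ \mathrm{km}}{6\ \mathrm{km/s}} \approx 3{,}300\ \mathrm{s} \approx 56\ \text{minutes},
    \]

comfortably under an hour, ignoring boost, descent and deceleration.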

“DoD’s hypersonic technology efforts have made significant advancements in our technical understanding of several critical areas including aerodynamics; aerothermal effects; and guidance, navigation and control,” said Acting DARPA Director Kaigham J. Gabriel. “But additional unknowns exist.”

Tackling remaining unknowns for DoD hypersonics efforts is the focus of the new DARPA Integrated Hypersonics (IH) program. “History is rife with examples of different designs for ‘flying vehicles’ and approaches to the traditional commercial flight we all take for granted today,” explained Gabriel. “For an entirely new type of flight—extreme hypersonic—diverse solutions, approaches and perspectives informed by the knowledge gained from DoD’s previous efforts are critical to achieving our goals.”

To encourage this diversity, DARPA will host a Proposers’ Day on August 14, 2012, to detail the technical areas for which proposals are sought through an upcoming competitive broad agency announcement.

“We do not yet have a complete hypersonic system solution,” said Gregory Hulcher, director of Strategic Warfare, Office of the Under Secretary of Defense for Acquisition, Technology and Logistics. “Programs like Integrated Hypersonics will leverage previous investments in this field and continue to reduce risk, inform development, and advance capabilities.”

The IH program expands hypersonic technology research to include five primary technical areas: thermal protection system and hot structures; aerodynamics; guidance, navigation, and control (GNC); range/instrumentation; and propulsion.

At Mach 20, vehicles flying inside the atmosphere experience intense heat, exceeding 3,500 degrees Fahrenheit, which is hotter than a blast furnace capable of melting steel, as well as extreme pressure on the aeroshell. The thermal protection materials and hot structures technology area aims to advance understanding of high-temperature material characteristics to withstand both high thermal and structural loads. Another goal is to optimize structural designs and manufacturing processes to enable faster production of high-mach aeroshells.

The aerodynamics technology area focuses on future vehicle designs for different missions and addresses the effects of adding vertical and horizontal stabilizers or other control surfaces for enhanced aero-control of the vehicle. Aerodynamics seeks technology solutions to ensure the vehicle effectively manages energy to be able to glide to its destination. Desired technical advances in the GNC technology area include advances in software to enable the vehicle to make real-time, in-flight adjustments to changing parameters, such as high-altitude wind gusts, to stay on an optimal flight trajectory.

The range/instrumentation area seeks advanced technologies to embed data measurement sensors into the structure that can withstand the thermal and structural loads to provide real-time thermal and structural parameters, such as temperature, heat transfer, and how the aeroshell skin recedes due to heat. Embedding instrumentation that can provide real-time air data measurements on the vehicle during flight is also desired. Unlike subsonic aircraft that have external probes measuring air density, temperature and pressure of surrounding air, vehicles traveling at Mach 20 can’t take external probe measurements. Vehicle concepts that make use of new collection and measurement assets are also being sought.

The propulsion technology area is developing a single, integrated launch vehicle designed to precisely insert a hypersonic glide vehicle into its desired trajectory, rather than adapting a booster designed for space missions. The propulsion area also addresses integrated rocket propulsion technology onboard vehicles to enable a vehicle to give itself an in-flight rocket boost to extend its glide range.

“By broadening the scope of research and engaging a larger community in our efforts, we have the opportunity to usher in a new area of flight more rapidly and, in doing so, develop a new national security capability far beyond previous initiatives,” explained Air Force Maj. Christopher Schulz, DARPA program manager, who holds a doctorate in aerospace engineering.

The IH program is designed to address technical challenges and improve understanding of long-range hypersonic flight through an initial full-scale baseline test of an existing hypersonic test vehicle, followed by a series of subscale flight tests, innovative ground-based testing, expanded modeling and simulation, and advanced analytic methods, culminating in a test flight of a full-scale hypersonic X-plane (HX) in 2016. HX is envisioned as a recoverable next-generation configuration augmented with a rocket-based propulsion capability that will enable and reduce risk for highly maneuverable, long-range hypersonic platforms.

More information regarding the August 14 Proposers’ Day is available here.

New chip captures power from multiple sources

Engineerblogger
July 10, 2012


Graphic: Christine Daniloff

Researchers at MIT have taken a significant step toward battery-free monitoring systems — which could ultimately be used in biomedical devices, environmental sensors in remote locations and gauges in hard-to-reach spots, among other applications.

Previous work from the lab of MIT professor Anantha Chandrakasan has focused on the development of computer and wireless-communication chips that can operate at extremely low power levels, and on a variety of devices that can harness power from natural light, heat and vibrations in the environment. The latest development, carried out with doctoral student Saurav Bandyopadhyay, is a chip that could harness all three of these ambient power sources at once, optimizing power delivery.

The energy-combining circuit is described in a paper being published this summer in the IEEE Journal of Solid-State Circuits.

“Energy harvesting is becoming a reality,” says Chandrakasan, the Keithley Professor of Electrical Engineering and head of MIT’s Department of Electrical Engineering and Computer Science. Low-power chips that can collect data and relay it to a central facility are under development, as are systems to harness power from environmental sources. But the new design achieves efficient use of multiple power sources in a single device, a big advantage since many of these sources are intermittent and unpredictable.

“The key here is the circuit that efficiently combines many sources of energy into one,” Chandrakasan says. The individual devices needed to harness these tiny sources of energy — such as the difference between body temperature and outside air, or the motions and vibrations of anything from a person walking to a bridge vibrating as traffic passes over it — have already been developed, many of them in Chandrakasan’s lab.

Combining the power from these variable sources requires a sophisticated control system, Bandyopadhyay explains: Typically each energy source requires its own control circuit to meet its specific requirements. For example, circuits to harvest thermal differences typically produce only 0.02 to 0.15 volts, while low-power photovoltaic cells can generate 0.2 to 0.7 volts and vibration-harvesting systems can produce up to 5 volts. Coordinating these disparate sources of energy in real time to produce a constant output is a tricky process.

So far, most efforts to harness multiple energy sources have simply switched among them, taking advantage of whichever one is generating the most energy at a given moment, Bandyopadhyay says, but that can waste the energy being delivered by the other sources. “Instead of that, we extract power from all the sources,” he says. The approach combines energy from multiple sources by switching rapidly between them.

Another challenge for the researchers was to minimize the power consumed by the control circuit itself, to leave as much as possible for the actual devices it’s powering — such as sensors to monitor heartbeat, blood sugar, or the stresses on a bridge or a pipeline. The control circuits optimize the amount of energy extracted from each source.

The system uses an innovative dual-path architecture. Typically, power sources would be used to charge up a storage device, such as a battery or a supercapacitor, which would then power an actual sensor or other circuit. But in this control system, the sensor can either be powered from a storage device or directly from the source, bypassing the storage system altogether. “That makes it more efficient,” Bandyopadhyay says. The chip uses a single time-shared inductor, a crucial component to support the multiple converters needed in this design, rather than separate ones for each source.
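
The control idea can be caricatured in software. The sketch below is a purely conceptual, hypothetical model, not the MIT circuit: it time-shares one conversion slot among the three source types each cycle, extracts whatever each currently offers, and exercises the dual path by feeding the load directly when the harvest suffices and drawing on (or topping up) storage otherwise. All names and numbers are invented for the illustration.

    # Conceptual model of combining intermittent harvesters (not the actual MIT chip).
    # Energy figures are arbitrary illustrative numbers, in microjoules per control cycle.

    import random

    SOURCES = ("thermal", "photovoltaic", "vibration")
    LOAD_UJ = 3.0                      # energy the sensor load needs per control cycle

    def harvest(source):
        """Pretend measurement of how much energy a source offers in this time slot."""
        ceiling = {"thermal": 1.0, "photovoltaic": 2.5, "vibration": 4.0}[source]
        return random.uniform(0.0, ceiling)

    def run_cycle(storage_uj):
        """One cycle: visit every source (time-shared inductor), then route the energy."""
        harvested = sum(harvest(s) for s in SOURCES)   # take power from all sources, not just the best
        if harvested >= LOAD_UJ:                       # dual path: feed the load directly...
            storage_uj += harvested - LOAD_UJ          # ...and bank the surplus
        else:
            deficit = LOAD_UJ - harvested              # ...or make up the shortfall from storage
            storage_uj = max(storage_uj - deficit, 0.0)
        return storage_uj

    storage = 10.0
    for _ in range(5):
        storage = run_cycle(storage)
        print(f"storage = {storage:.2f} uJ")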

David Freeman, chief technologist for power-supply solutions at Texas Instruments, who was not involved in this work, says, “The work being done at MIT is very important to enabling energy harvesting in various environments. The ability to extract energy from multiple different sources helps maximize the power for more functionality from systems like wireless sensor nodes.”

Only recently, Freeman says, have companies such as Texas Instruments developed very low-power micro-controllers and wireless transceivers that could be powered by such sources. “With innovations like these that combine multiple sources of energy, these systems can now start to increase functionality,” he says. “The benefits from operating from multiple sources not only include maximizing peak energy, but also help when only one source of energy may be available.”

The work has been funded by the Interconnect Focus Center, a combined program of the Defense Advanced Research Projects Agency and companies in the defense and semiconductor industries.

Source: MIT


University to play key role in European solar energy technology project

Engineerblogger
July 10, 2012


Professor Kwang-Leong Choy


The University of Nottingham has joined a 10 million euro project to develop cost effective, solar generated electricity.

Photovoltaic (PV) electricity generation converts solar radiation into electricity using solar cell panels. At the moment, producing silicon solar cells involves the use of complicated equipment such as vacuum processes, high temperatures and clean rooms, which makes the cost of energy generated in this way expensive.

Establishing a way to fabricate cost-effective high efficiency solar cells has long been of interest to both academics and industry. The Novel Nanostructured Thin/Thick Film Processing Group, which is based at the University, will be working on the project, entitled “SCALENANO” to develop cost-effective photovoltaic devices and modules based on advanced thin film technologies.

SCALENANO, which is part of the European FP-7 project, runs until 2015, and involves 13 European partners from research institutes, universities and companies, who all have an interest in the development of PV technologies.

Speaking about the project, Professor Kwang-Leong Choy, who is leading the research group at The University of Nottingham, said: “As the global supply of fossil fuels declines, the ability to generate sustainable energy will become absolutely vital. Generating electricity by converting solar radiation into electricity, potentially provides us with an unlimited source of energy.

“At the moment, the production of silicon solar cells involves complicated equipment, vacuum processes and clean rooms which makes the cost of PV cells very expensive. By working together with academic and industrial partners across Europe, we are confident that we will be able to find a way of fabricating cost-effective, high efficiency solar cells, which will benefit businesses and households across the world.”
Groundbreaking achievements

There are issues with the thin film solar cells currently commercialised, due to challenges with depositing the materials uniformly over a large area and the limited supply of indium, which is used in the production process.

Professor Choy and her group at The University of Nottingham will build on groundbreaking achievements they have already made in the area of thin film solar cell technologies, focusing both on solving the problem of uniformity and on applying alternatives to indium to develop high-performance, sustainable solar cells.

Speaking about the SCALENANO project, Mike Carr, The University of Nottingham’s Director of Business Engagement, said: “The work that Professor Choy and her team are doing in photovoltaic technology is a great example of how innovations developed by researchers at The University of Nottingham can have potentially enormous benefits in industry. We always welcome the opportunity to meet with businesses who are interested in exploring ways in which we can work together to commercialise ideas and launch new products onto the market.”

Source: University of Nottingham

Will a carbon fiber supply crunch emerge?

Engineerblogger
July 10, 2012


Carbon Fibre Frame Credit: silovu.mysecondarydns.com

Forecasts for carbon fiber (CF) usage in 2020 vary widely, from 136,000 tonnes according to CF supplier SGL Group (Wiesbaden, Germany) to 342,000 tonnes in a best-case scenario painted by Professor Andrew Walker, director of the Northwest Composites Centre at the University of Manchester, during the recent JEC Asia Carbon Fiber Forum in Singapore. If the most optimistic scenario is to materialise, CF suppliers will need to move quickly to have any hope of matching this booming demand; otherwise growth may be restrained by lack of supply.

Through to 2015, around 30,000 tonnes/year of small tow PAN-based CF capacity is due to be added and, based on a coefficient of 0.7 to take account of actual plant operating conditions, this translates to just over 67,000 tonnes of output, according to Frank Glowacz, technical editor at JEC Composites (Paris, France). Large tow capacity, meanwhile, will almost double, from 22,150 tonnes/year in 2011 to 43,200 tonnes/year in 2015, with output of close to 39,000 tonnes.

This 106,000 tonnes/year of output capability seems more than enough to cover demand on a conservative growth path, given that 2015 demand may only be of the order of 47,000 tonnes, but there would be supply concerns if growth tracks the optimistic route, given that it takes around two years to construct and start up a plant. "If we want to grow the market we need to ensure supply is there, and in my view the solution is new entrants," says Walker. "The existing players, particularly the Japanese, are too conservative, and only respond to confirmed projects when adding capacity." Japanese suppliers' share of global small tow PAN-based CF capacity will decline from 59% in 2011 to 50% in 2015.
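
To make the supply-versus-demand arithmetic above explicit, here is a minimal sketch in Python using the figures as quoted; the large tow utilisation factor is back-calculated from those figures and is an inference, not a number given in the article.

    # Back-of-envelope carbon fiber supply vs demand for 2015 (tonnes), using the
    # quoted figures. The ~90% large tow utilisation is inferred, not quoted.

    small_tow_output = 67_000      # quoted output, already reflects the 0.7 coefficient
    large_tow_capacity = 43_200    # quoted nameplate capacity, tonnes/year
    large_tow_output = 39_000      # quoted output
    conservative_demand = 47_000   # quoted conservative 2015 demand estimate

    total_output = small_tow_output + large_tow_output        # ~106,000 tonnes/year

    print(f"implied large tow utilisation: {large_tow_output / large_tow_capacity:.0%}")
    print(f"total output capability: {total_output:,} tonnes/year")
    print(f"headroom over conservative demand: {total_output - conservative_demand:,} tonnes")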

New suppliers are thankfully emerging, such as Alabuga-fiber in the Russian Republic of Tatarstan, which expects to be onstream in 2014, and Kemrock (Vadodara) in India, which started production of industrial CF grades in 2011. The Middle East is also a potential source of CF given its low energy costs. In fact, Saudi Basic Industries Corporation (SABIC, Riyadh) has licensed technology from Montefibre SpA (Milan, Italy) for the production of CF, with the initial intent being to construct a 3,000-tonnes/yr CF plant at Montefibre's existing acrylic fiber production site in Spain and, in the longer term, to replicate this effort in Saudi Arabia. JEC's Glowacz also notes that Qatar has high potential for CF production; a plan to set up a carbon fiber composites facility for auto components, as a joint venture of Qatar Automotive Gateway (Doha) and UK company Prodrive (Oxfordshire), had previously been announced.

Source: Plastics Today

Researchers devise scalable method for fabricating high-quality graphene transistors

Engineerblogger
July 10, 2012


Self-aligned graphene transistor

Graphene, a one-atom-thick layer of graphitic carbon, has attracted a great deal of attention for its potential use as a transistor that could make consumer electronic devices faster and smaller.
 
But the material's unique properties, and the shrinking scale of electronics, also make graphene difficult to fabricate on a large scale. Producing high-performance graphene devices with conventional fabrication techniques often damages the graphene lattice, degrading performance and resulting in problems that include parasitic capacitance and series resistance.
 
Now, researchers from the California NanoSystems Institute at UCLA, the UCLA Department of Chemistry and Biochemistry, and the department of materials science and engineering at the UCLA Henry Samueli School of Engineering and Applied Science have developed a successful, scalable method for fabricating self-aligned graphene transistors with transferred gate stacks. 
 
By performing the conventional lithography, deposition and etching steps on a sacrificial substrate before integrating with large-area graphene through a physical transferring process, the new approach addresses and overcomes the challenges of conventional fabrication. With a damage-free transfer process and a self-aligned device structure, this method has enabled self-aligned graphene transistors with the highest cutoff frequency to date — greater than 400 GHz.
 
IMPACT:
The research demonstrates a unique, scalable pathway to high-speed, self-aligned graphene transistors and holds significant promise for the future application of graphene-based devices in ultrahigh-frequency circuits.
 
AUTHORS:
Authors of the research include UCLA chemistry postdoctoral scholars Lei Liao and Hailong Zhou; UCLA chemistry graduate students Lixin Liu and Shan Jiang; UCLA materials science and engineering graduate students Rui Cheng, Yu Chen, YungChen Lin and Jinwei Bai (now a research scientist at IBM); UCLA associate professor of materials science and engineering Yu Huang; and UCLA associate professor of chemistry and biochemistry Xiangfeng Duan.
 
Professors Huang and Duan are also members of the California NanoSystems Institute at UCLA.
 
FUNDING:
The research was supported by the National Science Foundation, the National Institutes of Health and the U.S. Office of Naval Research.
 
JOURNAL: 
The research was published in the July 2 issue of Proceedings of the National Academy of Sciences and is available online at http://bit.ly/N8rM7o.
 
Source: UCLA

Tuesday 3 July 2012

Research paves the way for accurate manufacturing of complex parts for aerospace and car industries

Engineerblogger
July 3, 2012


A complex SLM part

Producing strong, lightweight and complex parts for car manufacturing and the aerospace industry is set to become cheaper and more accurate thanks to a new technique developed by engineers from the University of Exeter. The research team has developed a new method for making three-dimensional aluminium composite parts by mixing a combination of relatively inexpensive powders.

Combining these elements causes a reaction which produces particles around 100 nanometres in size, roughly 600 times smaller than the width of a human hair. The reaction distributes these particles uniformly through the material, making it very strong.
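
As a quick scale check (the article does not state a hair width, so the roughly 60 micrometre figure below is an assumed typical value):

    # "600 times smaller than the width of a human hair" vs the quoted ~100 nm.
    hair_width_m = 60e-6                      # assumed typical hair diameter, ~60 micrometres
    particle_size_m = hair_width_m / 600
    print(f"{particle_size_m * 1e9:.0f} nm")  # -> 100 nm, consistent with the article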

The process is based on the emerging technique of Selective Laser Melting (SLM), in which a laser builds complicated parts from metal powders, and was developed at the University’s Centre for Additive Layer Manufacturing. The new technique has the potential to manufacture aluminium composite parts such as pistons, drive shafts, suspension components, brake discs and almost any structural component of a car or aeroplane. It also enables the production of lighter structural designs with innovative geometries, further reducing the weight of products.

The team’s latest research findings are published in the Journal of Alloys and Compounds.

Parts for cars and aeroplanes are widely made from aluminium, which is relatively light, together with reinforcement particles to make it stronger. The traditional methods, which generally involve casting and mechanical alloying, can be inaccurate and expensive, especially when the part has a complex shape. Over the last decade, new SLM techniques have been developed which enable parts with more complicated shapes to be produced, and these techniques can be applied to manufacture aluminium composite parts from specific powder mixtures.

To carry out this new technique, the researchers use a laser to melt a mixture of powders composed of aluminium and a reactive reinforcing material, for example an iron oxide. A reaction between the powders results in the formation of new particles, which act as reinforcements and are distributed evenly throughout the composite material.

This method allows parts with complex shapes to be easily produced. The new materials have very fine particles compared with other composites, making them more robust. The reaction between constituents releases energy, which also means materials can be produced at a higher rate using less power. This technique is significantly cheaper and more sustainable than other SLM methods which directly blend very fine powders to manufacture composites.

University of Exeter PhD student Sasan Dadbakhsh of the College of Engineering, Mathematics and Physical Sciences said: “This new development has great potential to make high performance parts for car manufacturing, the aerospace industry and potentially other industries. Additive layer manufacturing technologies are becoming increasingly accessible so this method could become a viable approach for manufacturing."

Dr Liang Hao of the University of Exeter added: “This advancement allows the rapid development of sustainable lightweight composite components. This particularly helps to save a considerable amount of material, energy and cost for the production of one-off or small volume products.”

The Centre for Additive Layer Manufacturing (CALM) is a £2.6 million investment in innovative manufacturing for the benefit of businesses in the South West and across the rest of the UK. CALM is delivered in collaboration with EADS UK Ltd.

Source:  University of Exeter


Researcher offers new insights into power-generating windows

Engineerblogger
July 2, 2012


(Image: Eric Verdult, Kennis in Beeld)

On 5 July Jan Willem Wiegman is graduating from TU Delft with his research into power-generating windows. The Applied Physics Master’s student calculated how much electricity can be generated using so-called luminescent solar concentrators. These are windows which have been fitted with a thin film of material that absorbs sunlight and directs it to narrow solar cells at the perimeter of the window. Wiegman shows the relationship between the colour of the material used and the maximum amount of power that can be generated. Such power-generating windows offer potential as a cheap source of solar energy. Wiegman’s research article, which he wrote together with his supervisor at TU Delft, Erik van der Kolk, has been published in the journal Solar Energy Materials and Solar Cells ("Building integrated thin film luminescent solar concentrators: Detailed efficiency characterization and light transport modelling").

Windows and glazed facades of office blocks and houses can be used to generate electricity if they are used as luminescent solar concentrators. This entails applying a thin layer (for example a foil or coating) of luminescent material to the windows, with narrow solar cells at the perimeters. The luminescent layer absorbs sunlight and guides it to the solar cells at the perimeter, where it is converted into electricity. This enables a large surface area of sunlight to be concentrated on a narrow strip of solar cells.

The new stained glass

Luminescent solar concentrators are capable of generating dozens of watts per square metre. The exact amount of power produced by the windows depends on the colour and quality of the light-emitting layer and the performance of the solar cells. Wiegman’s research shows for the first time the relationship between the colour of the film or coating and the maximum amount of power.

A transparent film produces a maximum of 20 watts per square metre, which is an efficiency of 2%. To power your computer you would need a window measuring 4 square metres. The efficiency increases if the film is able to absorb more light particles. This can be achieved by using a foil that absorbs light particles from a certain part of the solar spectrum. A foil that mainly absorbs the blue, violet and green light particles will give the window a red colour. Another option is to use a foil that absorbs all the colours of the solar spectrum equally. This would give the window a grey tint. Both the red and the grey film have an efficiency of 9%, which is comparable to the efficiency of flexible solar cells.
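
A worked version of these figures, assuming the standard reference solar irradiance of roughly 1,000 watts per square metre (an assumption; the article does not state the irradiance it uses):

    # Luminescent solar concentrator arithmetic from the figures above.
    irradiance = 1000.0            # W/m^2, assumed reference insolation

    transparent_power = 20.0       # W/m^2 quoted for a transparent film
    transparent_eff = transparent_power / irradiance     # 0.02, i.e. the quoted 2%

    computer_power = 80.0          # W, implied by 4 m^2 x 20 W/m^2 for "your computer"
    window_area = computer_power / transparent_power     # 4 m^2, as in the article

    coloured_eff = 0.09            # 9% quoted for the red or grey film
    coloured_power = coloured_eff * irradiance           # 90 W/m^2 at full sun

    print(f"transparent film efficiency: {transparent_eff:.0%}")
    print(f"window area to power the computer: {window_area:.0f} m^2")
    print(f"red/grey film output at full sun: {coloured_power:.0f} W/m^2")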

Wiegman’s research has also shown the importance of a smooth film surface for the efficient transport of light particles to the perimeter of the window as they are then not impeded by scattering between the film and the window surface.

The research into power-generating windows is in keeping with the European ambition to make buildings as energy neutral as possible. Luminescent solar concentrators are a good way of producing cheap solar energy.

Source: TU Delft

Additional Information:

  • Visit the research website for more information about research into luminescent materials

Lightening the load: new materials for automotive

The Engineer
July 2, 2012
In the automotive sector, mineral fillers such as glass fibre are being replaced by materials such as hemp

Steel could one day be replaced as the material of choice for high-volume auto manufacture, but installed plant and entrenched manufacturing processes make the transition difficult

We’re in a brave new world of engineering innovation, with new inventions and developments enriching our lives every day. Yet some aspects of the devices we depend on have changed little from their inception. It might seem like a contradiction, but sometimes even the most innovative sectors find there are barriers to innovation.

Take, for example, the most visible example of the way technology changed our lives in the last century: the motor car. In many ways, the cars on the roads today are unrecognisable from the contraptions and the early fruits of mass production that trundled down the roads of the 1910s and 1920s. But in others, they have changed very little.

‘People have the perception that cars are basically steel boxes with glass windows, and there’s a good reason for that perception,’ said Prof Richard Dashwood, head of materials and sustainability at the Warwick Manufacturing Group (WMG) and chief technology officer of the new High Value Manufacturing Catapult centre. ‘It is because, largely, they are. Something like 99.9 per cent of all cars on the road are steel-intensive vehicles.’

But the issue of ‘lightweighting’ — reducing the mass of the vehicle — is very much on the minds of automotive manufacturers at the moment. ‘It’s driven by European legislation on CO2 emissions,’ Dashwood said. ‘While you can improve your powertrain and aerodynamics, it’s lightweighting that will give you the biggest CO2 improvement.’

So why, considering the many advances in materials that have taken place over the last century and which have been adopted so enthusiastically by, for example, the aerospace sector, is the automotive industry still so wedded to its original materials?

There are exceptions to this rule. Among the most notable is Jaguar Land Rover, which switched to all-aluminium bodies in 2009, after Jaguar led the way with aluminium construction with the XJ and XK models. Aluminium is, of course, lighter than steel with comparable strength. ‘We didn’t decide to use aluminium because it was new or different,’ said Jaguar’s chief technical specialist for body engineering, Mark White. ‘It is because aluminium delivers significant benefits for drivers.’

Research targets next-generation electric motors for luxury automobiles

Engineerblogger
July 2, 2012




Cobham, Jaguar Land Rover and Ricardo will carry out research into the design of economical electric motors that avoid expensive magnet materials.

Next-generation electric motors for low carbon emission vehicles are the target of a new collaborative research programme to be led by Cobham Technical Services. The project, ‘Rapid Design and Development of a Switched Reluctance Traction Motor’, will also involve partners Jaguar Land Rover and engineering consultancy Ricardo UK, and is co-funded by the Technology Strategy Board.

As part of its work in the project, Cobham will develop multi-physics software and capture the other partners’ methodology in order to design, simulate and analyze the performance of high efficiency, lightweight electric traction motors that eliminate the use of expensive magnetic materials. Using these new software tools JLR and Ricardo will design and manufacture a prototype switched reluctance motor that addresses the requirements of luxury hybrid vehicles.

The project is one of 16 collaborative R&D programmes to have won funding from the UK government-backed Technology Strategy Board and the Department for Business, Innovation and Skills (BIS), which have agreed to invest £10 million aimed at achieving significant cuts in CO2 emissions for vehicle-centric technologies. The total value of this particular motor project is £1.5 million, with half the amount funded by the Technology Strategy Board/BIS, and the rest by the project partners.

According to Kevin Ward, Director of Cobham Technical Services - Vector Fields Software, “Design software for switched reluctance motors is at about the same level as diesel engine design software when it was first introduced. Cobham will develop its existing SRM capabilities to provide the consortium with enhanced tools based on the widely used Opera suite for design, finite element simulation and analysis. In addition to expanding various facets of Opera’s electromagnetic capabilities, we will investigate advanced integration with our other multi-physics software, to obtain more accurate evaluation of model related performance parameters such as vibration. Design throughput will also be enhanced via more extensive parallelization of code and developing an environment which captures the workflow of the design process.”

Tony Harper, Jaguar Land Rover's head of research, said: “It is important to understand the capability of switched reluctance motors in the context of the vehicle as a whole so that we can set component targets that will deliver the overall vehicle experience. Jaguar Land Rover will apply its expertise in designing and producing world class vehicles to this project, with the aim of developing the tools and technology for the next generation of electric motors.”

Dr Andrew Atkins, chief engineer – innovation, at Ricardo UK, said: “The development of technologies enabling the design of electric vehicle motors that avoid the use of expensive and potentially carbon-intensive rare-earth metals, is a major focus for the auto industry. Ricardo is pleased to be involved in this innovative programme and we look forward to working with Cobham and Jaguar Land Rover to develop this important new technology. This will further build upon our growth plans for electric drives capability and capacity.”

The project has a three year timetable, at the end of which improved design tools and processes will be in place to support rapid design, helping to accelerate the uptake of this technology into production. Aside from the need to further reduce CO2 emissions from hybrid vehicles by moving to more efficient and lower weight electric motors, there is an urgent requirement to eliminate the use of rare earth elements, which are in increasingly short supply and have risen ten-fold in cost in recent years. Virtually all electric traction motors currently used in such applications employ permanent magnets made from materials such as neodymium-iron-boron and samarium-cobalt. Since switched reluctance motors do not use permanent magnets, they are likely to provide the ideal replacement technology. However, one of the main challenges of the project will be to produce a torque-dense motor that is also quiet enough for use in luxury vehicles.

Source: Ricardo


Researchers develop battery that can be painted onto most surfaces

Engineerblogger
July 2, 2012

An electron microscope image of a spray-painted lithium-ion battery developed at Rice University shows its five-layer structure. (Credit: Ajayan Lab/Rice University)

Researchers at Rice University have developed a lithium-ion battery that can be painted on virtually any surface.

The rechargeable battery created in the lab of Rice materials scientist Pulickel Ajayan consists of spray-painted layers, each representing the components in a traditional battery. The research appears in Nature’s online, open-access journal Scientific Reports.

“This means traditional packaging for batteries has given way to a much more flexible approach that allows all kinds of new design and integration possibilities for storage devices,” said Ajayan, Rice’s Benjamin M. and Mary Greenwood Anderson Professor in Mechanical Engineering and Materials Science and of chemistry. “There has been a lot of interest in recent times in creating power sources with an improved form factor, and this is a big step forward in that direction.”

Lead author Neelam Singh, a Rice graduate student, and her team spent painstaking hours formulating, mixing and testing paints for each of the five layered components – two current collectors, a cathode, an anode and a polymer separator in the middle.

The materials were airbrushed onto ceramic bathroom tiles, flexible polymers, glass, stainless steel and even a beer stein to see how well they would bond with each substrate.

In the first experiment, nine bathroom tile-based batteries were connected in parallel. One was topped with a solar cell that converted power from a white laboratory light. When fully charged by both the solar panel and house current, the batteries alone powered a set of light-emitting diodes that spelled out “RICE” for six hours; the batteries provided a steady 2.4 volts.

The researchers reported that the hand-painted batteries were remarkably consistent in their capacities, within plus or minus 10 percent of the target. They were also put through 60 charge-discharge cycles with only a very small drop in capacity, Singh said.

Each layer is an optimized stew. The first, the positive current collector, is a mixture of purified single-wall carbon nanotubes with carbon black particles dispersed in N-methylpyrrolidone. The second is the cathode, which contains lithium cobalt oxide, carbon and ultrafine graphite (UFG) powder in a binder solution. The third is the polymer separator paint of Kynar Flex resin, PMMA and silicon dioxide dispersed in a solvent mixture. The fourth, the anode, is a mixture of lithium titanium oxide and UFG in a binder, and the final layer is the negative current collector, a commercially available conductive copper paint, diluted with ethanol.
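
For reference, the stack order as described can be summarised in a simple data structure; the wording below just paraphrases the paragraph above.

    # The five spray-painted layers of the Rice battery, listed in painting order.
    LAYERS = [
        ("positive current collector", "purified single-wall carbon nanotubes and carbon black in N-methylpyrrolidone"),
        ("cathode", "lithium cobalt oxide, carbon and ultrafine graphite (UFG) in a binder solution"),
        ("separator", "Kynar Flex resin, PMMA and silicon dioxide in a solvent mixture"),
        ("anode", "lithium titanium oxide and UFG in a binder"),
        ("negative current collector", "commercial conductive copper paint diluted with ethanol"),
    ]

    for position, (name, paint) in enumerate(LAYERS, start=1):
        print(f"Layer {position}: {name} -- {paint}")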

“The hardest part was achieving mechanical stability, and the separator played a critical role,” Singh said. “We found that the nanotube and the cathode layers were sticking very well, but if the separator was not mechanically stable, they would peel off the substrate. Adding PMMA gave the right adhesion to the separator.” Once painted, the tiles and other items were infused with the electrolyte and then heat-sealed and charged.

Singh said the batteries were easily charged with a small solar cell. She foresees the possibility of integrating paintable batteries with recently reported paintable solar cells to create an energy-harvesting combination that would be hard to beat. As good as the hand-painted batteries are, she said, scaling up with modern methods will improve them by leaps and bounds. “Spray painting is already an industrial process, so it would be very easy to incorporate this into industry,” Singh said.

The Rice researchers have filed for a patent on the technique, which they will continue to refine. Singh said they are actively looking for electrolytes that would make it easier to create painted batteries in the open air, and they also envision their batteries as snap-together tiles that can be configured in any number of ways.

“We really do consider this a paradigm changer,” she said.

Co-authors of the paper are graduate students Charudatta Galande and Akshay Mathkar, alumna Wei Gao, now a postdoctoral researcher at Los Alamos National Laboratory, and research scientist Arava Leela Mohana Reddy, all of Rice; Rice Quantum Institute intern Andrea Miranda; and Alexandru Vlad, a former research associate at Rice, now a postdoctoral researcher at the Université Catholique de Louvain, Belgium.

The Advanced Energy Consortium, the National Science Foundation Partnerships for International Research and Education, Army Research Laboratories and Nanoholdings Inc. supported the research.







Source: Rice University