
Extreme Energy Efficiency and Robotic Redesign

Earlier this week I presented a conference paper that covers some work I’ve done recently on squeezing every bit of energy efficiency out of robotic systems by exploiting their natural dynamics. Any physical system tends to move or vibrate in a specific way (think of the resonant vibrations of a wine glass), and this is what I mean by ‘natural dynamics’. It turns out that we can use natural dynamics to our advantage to dramatically reduce the energy needed to perform certain tasks. In this paper the application was robotics, but the idea extends to other domains as well.

You can get all of the details from the conference paper or the MATLAB code, but I thought you might enjoy this video that explains some of the highlights of the paper and includes some animations of the different robot designs compared in the paper:

Posted: April 28th, 2012 | Filed under: Design, Energy, Modeling, Optimization, Publications | No Comments »

Part IV: Semi-Autonomous Control Framework: Present Performance and Future Work

Please welcome back Sterling Anderson, a Ph.D. candidate at MIT, for the final post in his series on semi-autonomous driver assistance systems.

We’ve made it! Congratulations to all those who hung on through the first three posts in this series. Having done so, you are better prepared to understand and appreciate what I’m about to show you. For those tuning in for the first time (or those of you who decided to skip straight to the good stuff), welcome! The demonstration that follows should be accessible enough that you’ll be able to appreciate, at least in part, what we’ve done here. If at any point you find yourself asking “wait a minute, don’t some cars already do this?”, I would suggest you go back and read Parts 2 and 3 to understand the fundamental advances this framework provides over the existing state of the art.


Vehicular accidents are costly. Not only do they end lives, injure travelers, and destroy assets, but they also inspire excessively large, heavy, and inefficient vehicles. Active safety systems can assist error-prone human drivers in avoiding accidents and thereby improve safety, efficiency, and cost. Today’s active safety systems, however, are fundamentally limited by their inability to accurately quantify threat or to intervene in more than one dimension to help the human driver avoid it. As such, these systems must be implemented in an ad-hoc fashion, requiring significant fine-tuning to avoid conflicts in their sometimes-competing objectives.

What we have created is an integrated (read: ‘all-in-one’) planning and control framework that performs all of the functions of existing safety systems, in addition to predictively avoiding future hazards. This framework uses a fundamentally new and incredibly useful threat assessment method to predict the danger or ‘threat’ posed to the vehicle given its current state and the state of its surroundings. Based on this threat assessment, it then determines when, how, and to what degree it must intervene to ensure that the vehicle does not crash, lose control, or otherwise endanger its occupants. The controller is designed to allow the human driver as much control as possible in low-threat scenarios and to intervene only as necessary to keep the vehicle safe in high-threat scenarios. In the figures and videos that follow, I’d like to demonstrate a subset of the framework’s capabilities using figures and videos selected from the thousands of simulations and over 800 experimental trials that we’ve used to vet it. Note that due to proprietary controls at Ford’s proving grounds, we were unable to record video of our Jaguar S-Type performing these maneuvers. Instead, we recorded telemetry data from each experiment and reproduced the results in high-fidelity simulation software (ADAMS/car).

Each of the videos below overlays the results from two simulations: the gray vehicle is controlled solely by a human driver model whereas the blue vehicle is also fitted with the semi-autonomous controller. In experimental trials, 8 different human drivers, each with different driving styles, were tested.


The experiments shown in the figure below illustrate the semi-autonomous controller’s ability to adjust its behavior to the preference and/or performance of the human driver. The upper plot shows the vehicle path as the driver drifted laterally in the lane (edges shown in gray). The lower subplot shows the proportion of available steering control assumed by the controller.

Note that by simply changing the threat threshold at which the controller intervenes, we can allow the human driver more or less control in low-threat scenarios (between X = 0 and 100 meters) without adversely affecting the controller’s ability to keep the vehicle safely within the lane in high-threat situations. Thus, an inexperienced or cautious driver might prefer more controller intervention all the time in order to smooth out mistakes, while a seasoned or more adventurous driver would prefer that the controller not intervene until intervention was absolutely necessary. In the figure above, the red solid line represents an intervention function tuned to the more cautious driver, while the magenta dash-dotted line shows the results of tuning the controller to a more experienced driver. Notice that in both cases, the controller allowed the human to wander freely within the lane while intervening as necessary to prevent unsafe lane departure. The black dashed line shows what happens when the controller is turned off.
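The tuning described above can be sketched conceptually as a blending of driver and controller steering commands, where the blending weight grows with assessed threat. The real controller is considerably more sophisticated than this (it assesses threat predictively over a future horizon); the function below, including its threshold and saturation parameters, is purely illustrative.

```python
def blended_steering(driver_cmd, controller_cmd, threat,
                     threshold=0.5, saturation=1.0):
    """Blend driver and controller commands based on assessed threat.

    Below `threshold` the driver retains full control; between `threshold`
    and `saturation`, authority shifts linearly to the controller; above
    `saturation` the controller has full authority. All values illustrative.
    """
    if threat <= threshold:
        k = 0.0  # low threat: no intervention
    elif threat >= saturation:
        k = 1.0  # high threat: full intervention
    else:
        k = (threat - threshold) / (saturation - threshold)
    return (1.0 - k) * driver_cmd + k * controller_cmd
```

In this sketch, raising `threshold` corresponds to tuning the controller for a more experienced driver: the controller stays out of the loop longer before intervening.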


The video below demonstrates the navigation framework’s performance in the presence of stationary hazards such as road edges, roadway obstacles (not shown), etc. In this simulation, the driver of both vehicles actively seeks to remain on the road surface — a difficult feat at 20 m/s (~44 mph).

Notice that including the semi-autonomous controller in the control loop not only keeps the vehicle stable, but also moderates the driver’s inputs in the process. Whereas the unassisted driver oversteers and loses control of the vehicle, the assisted driver sees that the vehicle is responding as desired and is thus more moderate in his steering commands. This allows him to maintain control of the vehicle. Moreover, allocating less than 50% of the available control authority to the controller (see the green bar on the right) is sufficient to keep the vehicle on the navigable roadway and within 0.4 meters of the (invisible) line at the center of the roadway that the driver model is trying to track. The combined effect of both inputs (driver and controller) is a vehicle trajectory that tracks the path the driver is trying to follow more closely than the driver could accomplish on his own.

In scenarios where a drowsy, inattentive, or otherwise-impaired driver fails to steer around an impending threat, the semi-autonomous controller foresees the threat, gauges the control action necessary to avoid it, and if the driver does not respond appropriately, takes the necessary control to keep the vehicle safe. Once the threat has been reduced, it returns control to the driver. The video below demonstrates one such case.

In order to avoid moving hazards, the semi-autonomous framework predicts their future positions and pre-emptively assists the driver in avoiding those regions of the environment. In both of the videos below, the human driver acts as though he doesn’t see the vehicles up ahead (no steering input). In the first video, the controller recognizes that a passing opportunity is available and takes only as much control as necessary to execute that maneuver. The second video illustrates a slightly different case in which the yellow vehicle accelerates once the blue vehicle initiates a passing maneuver (we’ve all known one). In this case, the controller behaves much like an alert driver would: seeking first to pass, then pulling back in behind the yellow vehicle as it accelerates.


I hope that the ideas discussed in this mini-series have provided a glimpse into the unique challenges and opportunities facing the emerging science of semi-autonomous control. While the issues and potential solutions we’ve discussed in these four articles might seem a bit long-winded for a blog, they only scratch the surface of the technology, user studies, and legal infrastructure requirements that must be satisfied before these systems can be commercially implemented. Not the least of these considerations are driver acceptance issues. Almost everywhere I go to present this technology, one of the first questions I am asked is whether our system will come with an ‘OFF’ switch. Many people distrust the invisible face of automation and prefer to feel like they are in complete control. While we cannot completely concede the latter without sacrificing safety, we can certainly improve drivers’ perception and acceptance of autonomy by creating reliable, non-intrusive systems that modify driver inputs as little as possible while avoiding hazards. Significant work remains to be conducted in both human factors and usability studies before this research is road ready (my standard legal disclaimer), but I believe that at some time in the near future, it will be. Here’s to smaller, lighter, safer, and more efficient automobiles!


I’d like to thank Dr. James Allison for his invitation to contribute these articles. Writing them has been an exercise in making my research more understandable to non-technical readers. For those of you who would like more details (and believe me there are many), I would invite you to read any of the applicable papers/theses listed on my website. If you have further questions, or would like to continue the conversation offline, I would be more than happy to visit with you. Please feel free to send me an email and/or leave comments below.

Posted: February 12th, 2011 | Filed under: Design, Modeling, Sustainability, Transportation | No Comments »

Streamlined Water Distribution Systems, Engineering Design, and Optimization

Water and energy are scarce resources, and their conservation is becoming increasingly important. Researchers at Wayne State University, led by civil engineering department chair Carol Miller, are developing a computer-controlled approach for operating the Detroit water system. According to the Chicago Tribune, Miller hopes to reduce the system’s energy consumption by shifting from manual to automatic control of the system’s pumps. The water system is so large that these improvements stand to deliver significant energy savings. In addition, Miller estimates this will save ‘10 million tons of greenhouse gases and other pollutants per year’.

This is one of many engineering systems that can be viewed as a design optimization problem, and I would like to use water distribution system improvement as an example to explain what design optimization is.

In engineering design we have lots of decisions to make, decisions like what materials to use, the size of components in our system, or how parts of our system should work together. In mathematical optimization, we seek to minimize or maximize something by choosing the right values for some set of variables. Design optimization links engineering design with mathematical optimization in a way that helps us identify what design decisions will lead to the best possible engineering design.

How can we frame the operation of a water distribution system as a design optimization problem? We have three tasks to make this happen:

  1. Identify Design Decisions: each design problem has some degree of design flexibility. That is, designers are free to make decisions about certain aspects of their system. In new systems that are being designed from the ground up, there is a lot more design freedom (more decisions to make). If a system must use some already developed components, then some design decisions are already made, reducing design freedom. In the case of the water distribution system, there is even less design freedom, since the physical system already exists. In any case, designers need to identify what aspects of a system they have control over; these aspects are the design variables. One set of specific values for the design variables represents one system design alternative. In the water distribution problem here, the design variables are quantities that define how each pump in the system should be controlled.
  2. Specify Design Objective: We need to have some way of comparing design alternatives and evaluating which designs are better. A design objective, or objective function, is a system property that we can measure, and that reflects the usefulness of a particular design. The design objective drives the design process, and is a critical choice in product development. Whether or not design optimization is used formally, product designers choose a design objective, or at least set priorities for their product (influenced by the market segment they are targeting). For example, in automotive design, Porsche engineers have performance as a design objective, while Aptera engineers consider energy efficiency paramount. The resulting designs reflect the difference in design objective. In the water distribution system problem, we are seeking to minimize energy consumption.
  3. List Design Constraints: Engineering design is full of tradeoffs; that is, if we seek to optimize one thing, something else is bound to get worse. We can’t simply focus on the design objective alone and expect to develop a usable system. In the automotive example, some constraints include safety, size, range, and cost. In addition, having chosen energy efficiency over performance as a design objective, Aptera engineers still need to meet some minimal performance constraints. Who would want to buy a car so slow that it’s not driveable in traffic, even if it could achieve 500 mpge? If we sought to minimize energy consumption in the water distribution system problem without considering any constraints, we might arrive at a solution that says we should simply never turn on any pumps. We need to impose a constraint to make this work: require that water delivery needs are met.

The design optimization approach to engineering design involves minimizing or maximizing some design objective, while meeting a set of constraints, by varying something you have control over. This way of presenting an engineering design problem is actually pretty natural. Some engineers may be using the design optimization process informally, even if they are not aware of it. Design can be viewed as the process of finding the set of design variable values that satisfies the design constraints and optimizes the design objective. To summarize the water distribution system design optimization problem: we are trying to find an automated pump control policy that minimizes energy consumption while ensuring water delivery needs are met.
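Once a problem is stated this way, it can be handed to an optimization algorithm: minimize an objective f(x) over the design variables x, subject to constraints. As a toy illustration (not a real water system model), here is a sketch using SciPy, where three pump speed settings are the design variables, energy is assumed to grow quadratically with pump speed, and a single constraint requires that total delivery meet demand. Every model and number here is made up for the sketch.

```python
import numpy as np
from scipy.optimize import minimize

def energy(x):
    # assumed model: pumping energy grows quadratically with pump speed
    return float(np.sum(x**2))

def delivery(x):
    # assumed model: delivered flow is proportional to pump speeds
    return float(np.sum(x))

demand = 1.5  # required total flow (made-up number)

result = minimize(
    energy,
    x0=np.ones(3),            # initial guess: all three pumps at full speed
    bounds=[(0.0, 1.0)] * 3,  # pump speeds between off and full
    constraints={"type": "ineq", "fun": lambda x: delivery(x) - demand},
)
print(result.x)  # optimal pump settings
```

Because this toy energy model is quadratic and symmetric, the optimizer spreads the load evenly across the three pumps rather than running one pump hard. In a real system with heterogeneous pumps, tanks, and pipe network physics, the best policy is far less obvious, which is exactly where optimization earns its keep.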

Now that we have presented our design problem as an optimization problem, how do we actually solve it? In some cases, engineers could build physical prototypes and use a trial-and-error approach to search for the optimal design. This could get very expensive. A somewhat more sophisticated approach might employ systematic testing and statistical models, but this still requires expensive physical prototypes. Would this even be practical for the water distribution system? Would engineers be allowed to try out new (untested) pump control ideas, risking water delivery failures for such a large metropolitan area? It sounds like we need some way to test design alternatives without actually having to try them in real life. This is where physics-based modeling and computer simulation come into play. Researchers and engineers have developed computer models for all sorts of systems that allow designers to test out ideas in a virtual world. These models help predict how a system design will behave without actually having to build it. The software and computers are far from free, and the models are not 100% accurate, but they are accurate enough to help make design decisions, and they allow designers to test out far more design alternatives than is possible with physical prototypes. If you would like to learn more about computer modeling, you can read through an ongoing series of articles on modeling.

If a system can be modeled using a computer simulation, then engineers can use optimization algorithms to solve the design optimization problem described above. These algorithms are computer programs that very intelligently choose what designs to test (using computer simulations) so that we can find the optimal design quickly. Using design optimization can help engineers develop better products in shorter time periods. Using optimization to develop better water distribution systems has actually been going on for several years. A full issue of the journal Engineering Optimization was devoted to this topic (you can read an overview of the issue here). In many of these articles, the engineers have additional design flexibility; they are not just looking at changing how the system is operated, but also at how the physical system is designed.

Design optimization and modeling are topics that I will revisit. These are important tools that could be used to transform how engineering design is done, and enable engineers to create systems that use much less energy, while meeting or exceeding our performance expectations. It’s my hope that more engineers adopt design optimization and use it to improve sustainability and quality of life, and that more people can become aware of design and design optimization, their impact on how we live, and the role they can play in our shift to a sustainable path.

Posted: August 2nd, 2009 | Filed under: Design, Energy, Modeling, Optimization | 5 Comments »

Introduction to Modeling II: Questions and Assumptions

This is the second post in a series on engineering modeling. In the first post, Introduction to Modeling I: Overview, I showed you a simple example system, a basic pendulum, that will be used to talk about some modeling concepts.


Before we start constructing any model, we need to create a list of questions the model is supposed to answer. In this pendulum example, we might want to know things like:

  • if the rod will be strong enough to support the weight
  • how the pendulum will move in a variety of conditions
  • what kind of force would be required to get the pendulum to move in a certain way

Here we will look at a model that addresses the first question. We will explore the other questions in later posts.

Much of the coursework engineers complete in college is dedicated to learning how to create models that predict the behavior of a variety of systems in order to answer questions. One interesting activity engineers engage in is gathering knowledge about a system, whether from past schoolwork, research publications, commercial software, or other resources, and integrating it all into a model that answers important questions about that system, which in turn helps engineers make decisions. To answer our first question about the pendulum, we will draw on material that engineering students learn in classes on statics (the study of forces in objects that are not moving) and solid mechanics (the study of what happens to solid objects when you apply forces to them).

Every model is an approximation of a real system. Approximate models are based on assumptions about a system and its environment; if these assumptions were all completely true, then the model would be 100% accurate. In reality, assumptions are only partially correct. Adding more assumptions can simplify a model, but can also make it less accurate. An engineer must manage the tradeoff between model accuracy and simplicity. Many assumptions are reasonable to make, but engineers need to be careful or they might get very unexpected results from the real system. Have a look at what happened when bridge engineers assumed that vibrations caused by wind blowing across a bridge had no effect:

Let’s start off with a very simple model for our pendulum, and assume the following:

  • The weight has a mass of m = 5 kilograms
  • The 2mm thick rod is made of an aluminum alloy with a yield stress (explained below) of 20 MPa
  • The rod is much less massive than the weight at the bottom, so we can neglect the mass of the rod
  • The aluminum is homogeneous, that is, it has the same properties everywhere inside the rod. It has no spots in the rod that are weaker than others.
  • The pendulum is not moving

A solid object can break apart when the stress inside it gets too high. You can think of stress as a type of internal pressure; it has the same units, force per area (PSI in U.S. customary units), just like pressure in a liquid or gas. In SI units, pressure is measured in pascals (Pa); one pascal is one newton per square meter. The symbol normally used to represent a stress value is sigma (\sigma). Stress is a slightly more complicated concept than pressure in a fluid: stress can be positive (tensile stress) or negative (compressive stress), and the direction of the stress matters. In the case of the pendulum, we will focus on just one type of stress: axial stress (\sigma_a), the stress that occurs due to a force along the length of an object. The drawing below illustrates the idea of axial stress.


Because gravity pulls down on the weight with a force of mg = 5 \mathrm{kg} \times 9.81 \mathrm{m/s^2} = 49.05 Newtons (where g is the acceleration of gravity), there is an internal axial force of 49.05 N along the length of the rod. In the drawing above I show an imaginary cut through the rod. Imagine yourself being in the middle of the cut. To keep the weight from falling, you would have to pull the two halves of the rod together with a force of 49.05 N. Each cross section of the rod must also resist a force of 49.05 N. The cross-sectional area of the rod is A_c=\pi d^2/4, where \pi \approx 3.1416, and d is the diameter of the rod, which is 2 millimeters (mm), or 0.002 meters (m). This A_c=3.1416\times 0.002^2/4 \approx 3.14\times 10^{-6} square meters (about 3.14 square millimeters) of aluminum must resist 49.05 N of force without breaking at every cross-section of the rod.

Let’s think conceptually about axial stress in a rod. If the force on the rod goes up, then the stress inside goes up proportionately. If we want to reduce the stress, we can make the rod thicker. This line of thinking is reflected in a very simple model for axial stress: \sigma_a=T/A_c, where T is the tension in the rod. We can rewrite this equation, our model for stress, in terms of the rod diameter and the mass at the bottom of the pendulum: \sigma_a=4mg/\pi d^2. We can see from this equation that increasing mass increases axial stress, while increasing the rod diameter reduces stress.

Different materials have different tolerances for stress. We say that a material that can handle more stress than another is stronger. This material property can be quantified using something called yield stress. A material will yield, or stretch past the point where it can return to its normal shape, when the stress inside it exceeds its yield stress. A material will break when the stress reaches an even higher level, called the ultimate strength or rupture stress of the material. We are assuming that the rod here is made of a type of aluminum that will yield when the stress exceeds 20 MPa (megapascals: one megapascal is one million pascals). We can determine the yield stress and rupture stress of a material using a machine like the one in the video below (something happens at 53 seconds):

By plugging numbers for mass and diameter into our equation for stress, we can calculate that the axial stress in the rod (when the pendulum is not moving) is 15.6 MPa, which is less than the yield stress of 20 MPa, so our model predicts that the rod will in fact be strong enough to support the weight. Congratulations! We have answered the first question using an engineering model.
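The calculation above is easy to reproduce in a few lines. Only numbers from the post are used (m = 5 kg, d = 2 mm, yield stress 20 MPa); the function name is my own.

```python
import math

def axial_stress(mass_kg, diameter_m, g=9.81):
    """Axial stress in a rod supporting a hanging mass: sigma_a = 4*m*g/(pi*d^2)."""
    return 4.0 * mass_kg * g / (math.pi * diameter_m**2)

m = 5.0              # kg, mass of the hanging weight
d = 0.002            # m, rod diameter (2 mm)
yield_stress = 20e6  # Pa, assumed yield stress of the aluminum (20 MPa)

sigma = axial_stress(m, d)
print(f"axial stress: {sigma / 1e6:.1f} MPa")  # prints 15.6 MPa
print("strong enough:", sigma < yield_stress)  # prints True
```

Trying other values of m and d in this sketch is a quick way to explore the model: doubling the rod diameter cuts the stress by a factor of four, which matches the \sigma_a=4mg/\pi d^2 equation above.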

When might our model for whether the rod will be strong enough break down? What conditions can you think of that would cause some of the assumptions not to hold? What things does the model not account for?

Posted: May 15th, 2009 | Filed under: Modeling | 1 Comment »

Losing our Edge in Simulation

In news related to my series of posts on modeling, a new report, the ‘International Assessment of Research and Development in Simulation-Based Engineering and Science’, was released recently. In short, the U.S. is losing its edge in modeling and simulation. This is a core technology that allows engineers and scientists to explore designs or conditions that are too expensive, too time-consuming, or too dangerous to test using a physical prototype. This issue is important enough to our future, in terms of both sustainability and economic competitiveness, that we need to refocus our efforts on modeling and simulation expertise. Perhaps if there were more discussion of what modeling is used for and how it helps improve the lives of everyone, it could attract a little more interest and be given a higher priority. I’m not saying we need to be discussing the details of how simulations work around the kitchen table, but we do need more people talking about what it is and why it is important to our future. We need to make clear the link between modeling (as well as other advanced design technologies) and positive impact on humanity, and perhaps inspire some to pursue engineering and science careers as a way to make a meaningful difference.

What are some good examples of how simulation has helped humanity? What do you think we can do to advance our expertise in simulation and modeling?

Posted: May 11th, 2009 | Filed under: Modeling | No Comments »

Introduction to Modeling I: Overview

Finally! This is the first ‘core’ material post for Design Impact. One of the most fundamental concepts in engineering design is that of an engineering model, that is, something that approximates the real behavior of a product or system without actually having to build it. This post is the first of a series that will introduce you to what engineering models are and how they are used in design. Have a look at this earlier post to read a basic definition of engineering design. I will use a simple example (a basic pendulum) in this series to illustrate some modeling concepts. I will assume that readers can follow a little algebra related to the example model, but I will try to keep most explanations graphical and conceptual. If anything is not clear please speak up! You can give your feedback by commenting on this post or by sending me an email.

When engineers design something, they start out by making a list of things that their product or system needs to do, and limitations their design must abide by. This list of requirements might specify things like cost limitations, size, weight, or a host of possible performance metrics. Engineers hope to find a design that meets these product requirements the best way possible. Sometimes it is a real challenge just to find any design that meets all the requirements simultaneously. Design is an iterative process where new designs are proposed, evaluated, and then modified. This process repeats until an acceptable design is found. Here is a (simplified) depiction of the engineering design process:

Engineering Design Process

Engineers have a few options for testing how well proposed designs meet product requirements. The most obvious is to build a physical prototype of a proposed design and test it out. Depending on the product, this can get very expensive (and time-consuming). In some cases, it may be impossible to build a physical prototype, or, if you can build one, the tests you need to run may be impractical while still in the design stage. Engineers need to be able to make predictions about how a design will perform without having to actually build and test it. This is where modeling comes in. An engineering model approximates the behavior of a real system, but is less expensive or time-consuming to create and use. Notice the word approximation: there is always some error between how a model behaves and the real system. More sophisticated models reduce this error, but are more difficult to create and use. Engineers must manage the tradeoff between model accuracy and expense: they need to choose a model that is accurate enough for their needs. On the other hand, models that are substantially more sophisticated than the design project requires could end up costing more in terms of development time, design and computing resources, and other expenses.

One option for an engineering model is to build a smaller or simplified physical prototype. This does save some time and expense over a full-scale prototype, but can still be costly. Another class of models are ‘virtual’; engineers can build virtual prototypes that can be tested on a computer, which is typically much faster than testing physical prototypes. Recall that design is an iterative process where many designs must be tested before determining the final design of a product. Consider the impact of virtual prototyping on the design process. If a computer model takes seconds to evaluate, while physical prototypes require days, weeks, or even longer to construct, what happens when engineers start using virtual prototypes? Design development time collapses. Even if the last few design iterations involve physical prototyping, the overall process is shortened dramatically. There are many other benefits to using appropriate computational models throughout the design process that we will explore in later posts. Unless I specify otherwise, when I use the term ‘model’ from here on out I am referring to a computational engineering model, that is, a virtual prototype.

I am going to use a simple model of a basic pendulum to introduce some modeling concepts in subsequent posts. Here I am just going to describe the physical system. A pendulum may not be the most exciting example to start off with, but it works very well for introducing important ideas using a single example. (So for now just pretend that the pendulum is a small part of a much cooler example.) In the drawing below we have a metal rod that is hanging from a pivot that lets the rod swing back and forth in one direction. The rod has a cylindrical cross-section with diameter d, and is supporting a heavy object below. This object has a mass m [1]. When the pendulum is swinging back and forth, the object has a velocity v [2], and when the rod is not straight up and down we can measure how far it has moved to the side with the angle θ (theta).

Basic Pendulum

What do you think about the idea of using ‘virtual prototypes’ in engineering design? Can you think of any cases where physical prototypes might be impractical or impossible to use?

[1] The mass of an object is different from its weight. Mass refers to how much stuff there is in an object, while weight specifically means how much force gravity exerts on an object. In some calculations we need to use the mass of an object and not its weight. The mass and weight of an object are related by the acceleration of gravity g, which depends on where the object is. At the surface of the earth, g is 32.2 feet per second per second (ft/sec2), or, using metric units (which make calculations easier in most cases), 9.81 meters per second per second (m/s2). An object’s weight is its mass times the acceleration of gravity. So if we are on earth, and an object has a mass of 10 kilograms, then gravity pulls down on it with a force of 98.1 Newtons (N); the Newton is the metric unit of force. In other words, the object weighs 98.1 N (or about 22 pounds). If we were on the moon, where g = 1.62 m/s2, the object would weigh only 16.2 N (about 3.6 pounds), but it would still have the same mass of 10 kg. You can read more details here.

[2] The velocity of something is both its speed and the direction in which it is headed. In the second diagram of the pendulum, the velocity of the mass is described graphically by the arrow labeled v. The object’s speed is proportional to the length of the arrow, and the direction it is moving is described by the direction the arrow points. In this pendulum, the velocity of the mass at the bottom of the pendulum is always in a direction perpendicular to the rod. For more details about velocity, click here.

Posted: May 10th, 2009 | Filed under: Design, Modeling | No Comments »