The Center for Data Innovation spoke with Mark Davis, the Senior Director for Design Research at Autodesk. Davis discussed his work on goal-directed design software, which allows designers to specify characteristics that an object must have and then algorithmically generates designs that fit the specifications. Davis also discussed why he thinks the technology will empower, not replace, human designers.
This interview has been lightly edited.
Travis Korte: Can you introduce your department at Autodesk and talk a little bit about the goals of Project Dreamcatcher?
Mark Davis: We’re looking at customer needs that are not yet explored by the typical portfolio that the company has. We try to stay four, five, ten years ahead of customers’ actual needs. We like to talk specifically with non-customers to understand what their needs in the future will be around design tools.
Dreamcatcher is an internal code name for our research project in the larger space of goal-directed design. The main distinction between a goal-directed design approach and the typical approach today is that goal-directed design is a 180-degree flip: rather than coming up with a solution and describing it geometrically, as you might in some of our tools today, you start with the definition of a problem. Instead of jumping to a geometric solution, you define and set up the problem for the computer to help you explore. You describe the goals of the design, the constraints, and any variable ranges, and then we serve up a workflow that really empowers designers.
First, we want to understand the design intent: what you are trying to do in terms of a solution. We can capture that in a number of ways, including through sketch input, natural language processing, search, and clustering algorithms.

In the second stage, the computer starts to generate potential solutions. This is massively parallel computing applied to objectives and variables that either conflict with each other or sit in relationships too complex for the brain to hold.

In the third phase, the designer gets back potentially hundreds of thousands of viable solutions and needs a way to sort and filter that vast array of possibilities, but also a way to override anything the machine suggests with human or aesthetic concerns. The designer works like a mixing board: “I realize I need to pick between the weightings on these conflicting variables, so I’m going to override the choice with an aesthetic concern or something else that sways the presentation of viable solutions.”

The last phase is about helping the designer with manufacturability. What’s the best way to manufacture the design, whether it’s additive, subtractive, injection molding, et cetera? It’s important to be able to specify the manufacturing tools and have them considered early in the design phase, so that only options that are actually possible with the user’s equipment are presented. And then, what is actually the best design for the manufacturing process you’ve chosen? What’s the design that gets the cleanest part out of an injection molding machine, for example?
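As a rough illustration (not Autodesk’s implementation), the generate-and-filter core of the workflow Davis describes can be sketched in Python. The bracket-like design, its two variables, and the weight and strength formulas below are invented stand-ins for a real parametric model and simulation:

```python
import random

random.seed(0)

def generate_candidates(n):
    """Stage 2: sample many candidate designs across the variable ranges.
    The variables and ranges here are made up for illustration."""
    for _ in range(n):
        yield {
            "thickness_mm": random.uniform(1.0, 10.0),
            "width_mm": random.uniform(10.0, 100.0),
        }

def is_viable(design, max_weight=50.0, min_strength=20.0):
    """Stage 3 (filtering): keep only designs meeting the stated goals.
    The weight and strength formulas are toy stand-ins for simulation."""
    weight = design["thickness_mm"] * design["width_mm"] * 0.05
    strength = design["thickness_mm"] * 4.0 + design["width_mm"] * 0.1
    return weight <= max_weight and strength >= min_strength

# Designer-facing result: a sortable list of viable options, lightest first,
# which the designer can then re-rank or override on other criteria.
viable = sorted(
    (d for d in generate_candidates(10_000) if is_viable(d)),
    key=lambda d: d["thickness_mm"] * d["width_mm"] * 0.05,
)
print(len(viable), "viable designs; lightest:", viable[0])
```

The point of the sketch is the inversion Davis mentions: the human states goals and ranges, and the machine does the enumeration and filtering.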
TK: You spoke a bit about how goal-directed design is particularly useful for highly complicated tasks or situations that humans find it hard to comprehend. Can you give me some examples of situations you’re talking about?
MD: On the front end, goal-directed design is going to have the designer consider options they wouldn’t normally consider. It’s going to reveal areas of solutions the human brain wouldn’t necessarily pursue. On the back end, it can resolve many more variables than a designer can hold in their head, so it can optimize against all sorts of criteria.

A couple of examples: in the architectural space, if you load up a problem in terms of building performance, you’ve got sun orientation, climate, glazing, material selection, structural conditions, and cost variables. All of a sudden you’ve got six or seven very complex interdependent variables to keep track of and no way to visualize how they perform against each other. If you want to make tradeoffs against a variable, loosen a constraint, or take a cost variable out, that’s really difficult to visualize. But a computer can compute all of those options against each other and then support user interaction once it’s done.

In the mechanical space, it’s the same sort of thing. You’re looking for a particular performance of a part. In aerospace or automotive it’s always strength against weight, but many more variables can go into the problem exploration: cost, availability of materials, manufacturing method, supply chain. Additive manufacturing in particular opens up a huge range of possibilities for exploring new materials. Now that we can design down to the voxel level, we can blend materials that have never been blended before. You can look for structural or performance characteristics of materials that don’t actually exist yet: define the structural and performance criteria you want and have the computer figure out what blend of materials will deliver on that requirement. It’s actually a pretty fundamental change.
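The tradeoff problem Davis describes, several objectives that conflict, is what multi-objective optimizers typically handle by computing a Pareto front: the set of designs that no other design beats on every objective at once. A minimal sketch, with invented numbers for a strength/weight/cost tradeoff:

```python
def dominates(a, b):
    """True if design a is at least as good as b on every objective
    (all minimized here) and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(designs):
    """Keep only designs not dominated by any other design."""
    return [d for d in designs
            if not any(dominates(other, d) for other in designs if other is not d)]

# Toy options as (weight_kg, cost_usd, -strength_kN); lower is better in each slot.
options = [
    (2.0, 120.0, -30.0),
    (1.5, 200.0, -28.0),
    (2.5, 100.0, -25.0),
    (3.0, 150.0, -20.0),  # dominated by the first option on all three objectives
]
print(pareto_front(options))
```

The first three options survive: each wins on at least one objective, which is exactly the “no way to visualize how they perform against each other” situation a computer can enumerate exhaustively.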
TK: You mentioned the wide variety of methods designers can use to input their design specifications. Talk a little about the motivation behind having such a versatile input system.
MD: The front end of the system is really the crux of getting something useful out of the computation. There are probably a dozen possible input methods we’re exploring, and frankly three quarters of them have been tried over the past decade and have failed. Natural language is a great example: machine learning algorithms and the like can get you maybe 90 percent of the way there, but even in academic research it reaches a certain point of success and goes no further. But there are a lot of other possibilities we’d like to explore that have only become possible in the last four or five years. Image search and the metadata underneath images is one way. We’re certainly not discounting the input of geometry either. If someone’s trying to design a new version of a product, they could still start with last year’s version. They can also load a parametric model into the system.
In the previous instantiations of research I’ve seen in this area, there’s so much definition required up front that by the time you’ve put it all in, you’ve basically done the math to solve the problem anyway. So to keep the exploration wide at the beginning, we allow several different methods of specifying design intent and lower the bar to doing so. Our professional-level tools are typically only operable by a very small set of experts in an organization. In a large organization, there’s the engineering department, then the optimization department, and then usually a very elite group of optimization engineers who understand the problem space well enough to describe the solution. That keeps access to the tool very limited. What we want to do is not only make it easier for professionals to approach goal-directed design, but also open it up to many more people. We have to lower the bar at this critical input stage: how can somebody describe the problem they’re trying to solve in a way that doesn’t require them to supply an algorithm?
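One way to picture “lowering the bar” is design intent captured as declarative data, goals, constraints, and variable ranges, rather than as code the user must write. All field names below are hypothetical illustrations, not Dreamcatcher’s actual schema:

```python
# Hypothetical problem statement: a non-specialist states what they want;
# the system, not the user, supplies the solving algorithm.
problem = {
    "goal": "minimize weight",
    "constraints": {
        "max_deflection_mm": 0.5,
        "min_safety_factor": 2.0,
    },
    "variables": {
        "wall_thickness_mm": (1.0, 8.0),
        "material": ["aluminum", "steel", "nylon"],
    },
    "manufacturing": "additive",
}

def validate(spec):
    """Toy check that a problem statement is complete enough to hand off
    to a solver; real systems would check far more than field presence."""
    required = {"goal", "constraints", "variables"}
    missing = required - spec.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return True

print(validate(problem))
```

The contrast with the “elite optimization engineers” workflow is that a statement like this can be authored, checked, and revised without any knowledge of the underlying solver.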
TK: What are the commercialization prospects for this technology?
MD: Being a research project gives Dreamcatcher a lot of flexibility to pursue paths that might not be commercially viable and to absorb risk that product teams might not want to absorb. Projects like this are in a unique position, free from the pressure of commercial viability, but that also allows the technology, once developed, to enter the company, the product teams, and the portfolio in a large number of ways. We may invent algorithms out of this work that can go across the portfolio into different simulation or optimization products we have. We may come up with a user-interface capability that could show up in products or as a plugin within a suite. We may develop a standalone application aimed more at consumers than at professional users. We don’t have any set plans for how we’ll benefit from the research, and that’s actually one of the key benefits: not having to make those decisions upfront. As customer need grows, we have opportunities to introduce the technology at a really granular level or wrapped up into a product offering. We haven’t announced how we’ll do that integration yet.
TK: A recent Fast Company article that featured Autodesk’s work on goal-directed design asked whether the technology would render designers obsolete. What’s your opinion on that?
MD: What I didn’t like so much about the approach in the Fast Company article is that the tagline seemed to be that the technology was going to replace the designer. Any of the customers and early adopters we’ve talked to can tell you they’re not interested in being replaced. It’s much more an opportunity for a designer to have a new capability, a new set of tools. We’ve heard customers describe it as a reliable smart person in the brainstorming session. So I didn’t follow the premise that it’s going to replace designers.
I liked the article because it’s getting this kind of thinking out into the mainstream. But it’s a little hard to express to a general audience how radical a departure this is from how people design products today. Today, designers have something in their head, they put a version of it into a digital tool, then sometimes they simulate it, and even less frequently they optimize it. They stop designing a product when they run out of time or money; whatever stage the design is at becomes the product they build. The opportunity with goal-directed design technology to eliminate the impossible in the design space early on, and to do that kind of exploration in the conceptual design phase, is going to open up whole worlds of possibilities for new types of design.
The second point is that, at least in the user research we’ve done so far, customers who might benefit from our work typically ask how this new capability fits into existing workflows, processes, and the types of things being designed today. That’s an important question, but it’s more important to talk about designing differently and designing different things. It’s marginally interesting to improve manufacturability, widen the range of options you can pursue, and optimize against those processes. For me it’s much more exciting to think about the new types of things you can design now that you couldn’t before. Additive manufacturing opens up a whole new world of possibilities for things that simply couldn’t be manufactured before. Designing different types of objects that take advantage of that new capability is hard to wrap your head around. When people talk about designers being supplanted or replaced by algorithms, that’s the old mindset: “We’ve got an existing process, workflow, and set of things we design. Uh-oh, here comes this technology; is it going to be a threat?” No. It’s a much more fundamental shift than that.