Ford vs. Mazda Transmissions: Why Does Quality Matter?

In the 1980s, Ford owned roughly 25% of Mazda (then known as Toyo Kogyo) and had Mazda manufacture some automatic transmissions for cars sold in the United States. Both companies built the same transmission from the same specification, and both achieved 100% specification conformance. However, the Ford-built transmissions generated more customer complaints about noise and incurred higher warranty repair costs. This led Ford engineers to investigate, and they found that the Ford-manufactured transmissions used 70% of the available tolerance spread for manufactured parts, while Mazda used only 27% (AC 2012-4265: Promoting Awareness in Manufacturing Students of Key Concepts of Lean Design to Improve Manufacturing Quality). The Ford engineers began to realize that the Mazda transmissions were of higher quality than the Ford-manufactured ones. It turned out that Mazda was using a slightly more expensive grinding process than Ford. This raised Mazda’s manufacturing costs; however, the full lifetime costs were higher for the Ford-manufactured transmissions.
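
To make the tolerance numbers concrete, here is a minimal sketch of how tolerance utilization can be computed from part measurements. The specification limits and sample values below are made up for illustration; they are not Ford’s or Mazda’s actual data.

```python
# Illustrative only: tolerance utilization from part measurements.
def tolerance_utilization(measurements, lower_spec, upper_spec):
    """Fraction of the allowed tolerance band that the parts actually use."""
    spread = max(measurements) - min(measurements)   # observed variation
    tolerance = upper_spec - lower_spec              # allowed variation
    return spread / tolerance

# Hypothetical shaft diameters (mm) from two processes, both within spec.
process_a = [24.93, 25.07, 24.95, 25.05, 24.96, 25.04]   # wider spread
process_b = [24.98, 25.02, 24.99, 25.01, 25.00, 25.00]   # tighter spread

for name, parts in [("Process A", process_a), ("Process B", process_b)]:
    usage = tolerance_utilization(parts, lower_spec=24.90, upper_spec=25.10)
    print(f"{name}: {usage:.0%} of the tolerance band used")   # 70% vs. 20%
```

Both data sets conform to the specification, but the second process leaves far more margin, which is the kind of difference the Ford engineers measured.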

This story is a prime example of why it is important to think about quality. Too often we focus on other metrics and neglect quality, or we use a single metric to define quality. Ford experienced this by focusing on a “Zero Defect” policy, thinking that zero defects in a transmission would produce a quality transmission. Mazda expanded on this policy and took the whole lifecycle cost and experience into consideration as it developed its transmissions. With this holistic view, it is easy to see why engineers need to think about quality across a program’s entire lifecycle.

Building Quality into the Lifecycle

If the goal of an organization is to deliver a quality product, engineers at all stages need to think about how they can add quality into the system. An easy way to think about how to add quality is to ask yourself: “What are the extra details, the extra effort, the extra care that can be put into the product?” When these extra efforts are applied to a properly defined system, the output is often a quality system. To a program manager, all this extra effort sounds like a fair amount of extra cost. This is true; however, it is important to weigh the short-term cost increase against the potential long-term cost savings. Below are two examples of how to add quality in the lifecycle.

Design

One of the first steps of the design effort is requirements development, and unfortunately a requirement like “system shall be of a quality design” does not cut it. Never mind that this requirement violates nearly every rule of good requirement writing; it also fails to take into account the characteristics of a quality system. Is quality the “spare no expense” engineering of high-end audio systems, or the good-quality-for-the-price value of Japanese-manufactured cars in the 1970s? It is important to identify how the customer and the market define quality. Having this understanding informs choices going forward and prevents a scenario where the market doesn’t value the added quality efforts.

Procurement/Manufacturing

The procurement/manufacturing phase of the lifecycle is where quality efforts are most visible. As parts are being ordered, it is important to think about how the whole supply chain thinks about quality. This involves reviewing the suppliers’ suppliers to verify that the parts being delivered do not have a poor design or a possible defect that could be hidden through integration. For internally manufactured parts, is extra effort being made to check that the solder on pins is clean and will not short other sections under heating? Extra thought and care should be given to the human interface of the system, as this normally plays a major role in determining the quality of a system. For software, do the user interfaces make sense, do they flow, are they visually appealing? These are the kinds of questions that should be asked to help guide engineers toward building a quality system.

“Quality Is Our Top Priority”

All too often, I find that a Scott Adams Dilbert comic strip highlights a common problem engineers face. The comic below is a perfect example: the Pointy-Haired Boss directs Dilbert, Alice, and Wally to focus on quality.

[DILBERT comic strip from Sunday, March 28, 2004]

What the Pointy-Haired Boss fails to realize is that quality and the rest of his priorities are not mutually exclusive and can be pursued concurrently. A quality system is one that is safe, law abiding, and financially viable. Quality should be weighed alongside these factors, making sure that the extra bit of design work is worthwhile. All of these factors, properly combined with good design and engineering, produce a quality system.

By Daniel Hettema. Reposted from the SPEC Innovations’ blog with permission.

Why Product Lifecycle Management Is Moving to the Cloud

“The cloud” means many things to many people. A common misconception is that the cloud is the Internet itself. People who hold this view assume that any information they put in the cloud can easily be “hacked,” so they see it as a very public thing. But those who work in cloud computing see it as a means to deliver safe, secure services to more people at a lower cost. You can share computer resources, including CPU power, memory, and storage. This sharing, or “on-demand” use, of computer resources means that you can pay less for those resources than you would if you provisioned them on your own.

To take advantage of this resource sharing, you must use applications that take the new environment into account. Simply putting a “web front end” on a client-server or desktop tool does not work well. Application programmers must re-architect their code to take advantage of the new capability and, at the same time, deal with problems such as latency, since data must now pass between the servers where it is stored and the web browser on the client machine. Those servers may be down the hall or across the continent, so there can be substantial delays in data transmission.

Scalability

Most desktop/client-server tools assume very little latency, so they grab a lot of information at a time and put it into local memory. That’s fine when you are close to the data, but in cloud computing the servers could be anywhere in the world, or at least across the continent. So, when people try to use a desktop tool in this new environment, it begins to break down quickly in terms of response time. Another way to say this is that these tools do not scale to meet growing needs, and yet the whole idea of cloud computing is to allow the application to scale to meet those needs.
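
As a sketch of the difference, a cloud-aware client fetches records in small pages on demand instead of pulling everything into local memory up front. The endpoint and parameter names below are hypothetical:

```python
# Hypothetical sketch: fetch records page by page instead of all at once.
import requests

def fetch_pages(base_url, page_size=100):
    """Yield records lazily so the client never holds the full dataset."""
    offset = 0
    while True:
        resp = requests.get(base_url,
                            params={"offset": offset, "limit": page_size})
        resp.raise_for_status()
        batch = resp.json()
        if not batch:                # no more records to fetch
            return
        yield from batch
        offset += page_size

# Usage (hypothetical URL): process records as they arrive, keeping both
# memory use and per-request latency bounded.
# for record in fetch_pages("https://api.example.com/records"):
#     process(record)
```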

Collaboration

Cloud computing also enables worldwide collaboration. Now the need to scale becomes critical, as more and more people work together and capture and generate more and more information. A “web-based” tool must be designed to process more information locally, including visualization of the data. Otherwise, we are back to central computing, where you had a dumb terminal connected to a computer that was often far away. I can still remember how slow the response was when that occurred. Even though bandwidth has grown to gigabits per second, we are trying to move terabytes of information.
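
A rough back-of-the-envelope calculation shows the problem. Even a full gigabit-per-second link needs hours to move a terabyte:

```python
# Transfer time for 1 terabyte over a 1 gigabit/s link (ignoring overhead).
data_bits = 1e12 * 8            # 1 TB expressed in bits
link_bps = 1e9                  # 1 Gbit/s
seconds = data_bits / link_bps
print(f"{seconds:,.0f} s, or about {seconds / 3600:.1f} hours")   # ~2.2 hours
```

This is why a cloud application has to minimize how much data it moves, not just rely on a fast pipe.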

PLM on the Cloud

So, what does all this have to do with Product Lifecycle Management (PLM)? PLM today requires a large amount of data, analytical tools to transform data into information, and personnel who collaborate to create the products. Clearly, PLM would benefit greatly from this new cloud computing environment. But where are the cloud computing products for this market? Legacy tool makers are reluctant to re-architect hundreds of thousands of lines of code. Such an effort would take years and be very expensive, only to compete with their own products during the transition. So, most have created a “web front end” that provides limited access to the information in the client-server or (worst case) desktop product.

Innoslate® is the rare exception in the PLM marketplace. Innoslate was designed from scratch as a cloud computing tool. The database back end persists the data, while the web front end visualizes it and performs the necessary analyses, including complex discrete event and Monte Carlo simulations. Innoslate supports all areas of PLM, from systems engineering, to program management, to product design, to process management, to data management, and more, all in one simple, collaborative, scalable, and easy-to-use tool. Check out www.innoslate.com for details.

What is a PLM Tool?

Product Lifecycle Management (PLM) software integrates cost-effective solutions to manage the useful life of a product.

PLM software tools allow you to:

  • Stay aligned with the customer’s requirements
  • Optimize cost and resources through simulated risk analysis
  • Reduce complexity with a single interconnecting database
  • Improve and maintain quality of a product throughout the lifecycle

PLM includes five primary areas:

  1. Systems engineering
  2. Product and portfolio management (PPM)
  3. Product design (CAx)
  4. Manufacturing process management (MPM)
  5. Product data management (PDM)

Let’s look at each area in more detail.

Systems Engineering

A PLM tool should support the systems engineer throughout the lifecycle by integrating requirements analysis and management (Requirements View and checker) with functional analysis and allocation (all the SysML diagrams, along with LML, IDEF0, N2, and others), solution synthesis (SysML, LML, Layer Diagram, Physical I/O, etc.), test and evaluation (Test Plans, Test Center), and simulation (discrete event and Monte Carlo). Many PLM tools lack the combination of all these capabilities. Innoslate® was made by systems engineers for systems engineers and is designed for the modern cloud environment, which enables massive scalability and collaboration. No other PLM tool has Innoslate’s combination of capabilities in this area.

Product and Portfolio Management

PPM includes pipeline, resource, financial, and risk management, as well as change control. Innoslate® provides all these capabilities along with a simple, easy-to-use modeling diagram to capture the business processes, load them with resources and costs, and then produce Gantt charts for the timeline. The Monte Carlo simulation also enables exploration of the schedule and cost risks due to variation in timing, costs, and resources. This approach is called Model-Based Program Management (MBPM); consider it an important adjunct to the systems engineering work.
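
As a minimal sketch of the kind of Monte Carlo schedule analysis described here (with made-up triangular duration estimates, not Innoslate’s actual simulation engine):

```python
# Illustrative Monte Carlo schedule risk: three sequential tasks whose
# durations vary; estimate the distribution of the total project duration.
import random

# (optimistic, most likely, pessimistic) durations in days -- made-up values.
tasks = [(8, 10, 15), (4, 5, 9), (12, 15, 25)]

totals = sorted(
    sum(random.triangular(low, high, mode) for low, mode, high in tasks)
    for _ in range(10_000)
)
print(f"Median completion: {totals[5_000]:.1f} days")
print(f"80th percentile:   {totals[8_000]:.1f} days")   # schedule buffer
```

Running thousands of trials like this turns single-point duration estimates into a risk distribution, which is exactly the insight MBPM is after.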

Innoslate® also captures risks, decisions, and other program management data completely within the tool using Risk Matrices and other diagrams. Change control is provided through the baselining of documents, the branching/forking capability, and the object-level history files.

Innoslate® provides a means to develop a program plan that can be linked to the diagrams and other information within the database. This feature enables you to keep all your information and documents together in one place.

Product Design

The Product Design area focuses on the capability to capture and visualize product design information from analysis to concept to synthesis. The Asset Diagram enables the addition of pictures to the standard boxes-and-lines diagrams, which supports the development of the high-level concept pictures that everyone needs. Innoslate’s CAD viewer not only displays STL and OBJ files, it can also create the equivalent Asset Diagram entities from an OBJ file, making the integration between the CAD tool and Innoslate more seamless. Other physical views, such as the Layer Diagram and Physical I/O, help view the physical model in ways that usually require a separate drawing tool.

Manufacturing Process Management

Innoslate provides strong process planning and resource planning capabilities using the Action Diagram and the other features discussed above. A direct interface from Innoslate® to other tools can be built using the software development kit (SDK) application programming interfaces (APIs). If the MPM tools have Internet access, you can use the Zapier Integration capability, which provides an interface to over 750 tools, ranging from GitHub to PayPal to SAP Jam Collaboration. In addition, Innoslate is routinely used for Failure Modes and Effects Analysis (FMEA), which is critical to MPM.
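
As a purely hypothetical sketch of what such an interface could look like (the endpoint URL, token, and field names are invented for illustration; the actual SDK documentation defines the real calls):

```python
# Hypothetical integration sketch: push an FMEA finding into a PLM tool
# over REST. The endpoint, auth token, and payload fields are all made up.
import requests

API_URL = "https://plm.example.com/api/entities"   # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <token>"}      # hypothetical auth scheme

finding = {
    "class": "Risk",
    "name": "Solder joint failure under thermal cycling",
    "attributes": {"severity": 8, "occurrence": 3, "detection": 4},
}

resp = requests.post(API_URL, json=finding, headers=HEADERS)
resp.raise_for_status()
print("Created entity:", resp.json().get("id"))
```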

Product Data Management

Capturing all the product data, such as the part number, part description, supplier/vendor, vendor part number and description, unit of measure, cost/price, schematic or CAD drawing, and material data sheets, can easily be accomplished using Innoslate. Most of the entities and attributes have already been defined in the baseline schema, but you can easily add more using the Schema Editor. You can develop a complete library of product drawings and data sheets by uploading electronic files as part of the “Artifacts” capture in the tool. Constructing a Work Breakdown Structure (WBS) or a Bill of Materials (BOM) is simply a standard report from the tool.

As a systems engineer, it’s important to let information drive your decisions. That requires detailed functional analysis and an underlying scalable database, and it is best accomplished with a PLM software tool that encompasses all five areas described above. Innoslate meets or exceeds the need in every area, so you are better equipped to face high-risk decisions.

Do Frameworks Really Work?

Over the past 20 years, we have been using various “Frameworks” to capture information about architectures and systems, beginning perhaps with the Zachman Framework, which is still one of the most successful. The Zachman Framework is timeless for this reason: it forced us to think about the standard interrogatives (Who, What, When, Why, Where, and How) and different perspectives (Executive, Business Management, Architect, Engineer, and Technician) as applied to the enterprise. The idea was to build models in each area, starting with simple lists and working toward detailed models of the systems at a component level.

Other frameworks followed, including the DoD Architecture Framework (DoDAF, which came from the C4ISR Architecture Framework), the Ministry of Defence Architecture Framework (MODAF), the NATO Architecture Framework, and many others. All these Frameworks were built on the idea that information could be easily “binned” into these boxes, but as we know, many of the models in these Frameworks include maps between different data elements, such as the CV-6 from DoDAF, which maps Capabilities to Operational Activities. Recently, the Object Management Group (OMG) and others have been trying to push out a Unified Architecture Framework (UAF) that combines the models from many of these other frameworks into a single, very large framework. OMG is also responsible for developing the Unified Modeling Language (UML), the Systems Modeling Language (SysML), and the Unified Profile for DoDAF and MODAF (UPDM).

All of these frameworks and languages are ways to capture information about a system or enterprise and show it as a set of data, often as a picture or model with two or more pieces of information. An example is the DoDAF OV-5a, a functional hierarchy of Operational Activities, which contains only a single entity class and the decomposition relationship between those entities. Another example is the OV-5b, which shows the Operational Activities and the “Resource Flows” (which many of us recognize as the inputs and outputs of the Operational Activities) between them. Thus we now have two entity classes and the associated relationships between them. Obviously, the information in the CV-6, OV-5a, and OV-5b overlaps, in that the same Operational Activities need to show up in each of these models. But how many of these different models would we need to completely describe a system or enterprise?
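
A toy illustration of that overlap (plain Python, not DoDAF tooling): the same activity data can back both a hierarchy view like the OV-5a and a flow view like the OV-5b.

```python
# One data set of Operational Activities drives two different views.
activities = {
    "Assess Situation": None,                # root of the hierarchy
    "Collect Data": "Assess Situation",      # child -> parent
    "Analyze Data": "Assess Situation",
}
flows = [("Collect Data", "Analyze Data", "raw observations")]

# OV-5a-like view: the decomposition hierarchy (one entity class).
for name, parent in activities.items():
    print(("  " if parent else "") + name)

# OV-5b-like view: the same activities plus the resource flows between them.
for source, target, resource in flows:
    print(f"{source} --[{resource}]--> {target}")
```

Keeping one underlying data set and generating each view from it avoids the re-entry and consistency problems that come with maintaining the models separately.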

An alternative way to capture information is to use an ontology (DoDAF 2.02 is, in fact, based on a rather large ontology, the DoDAF MetaModel 2.0, or DM2) that captures the information in a finite number of classes (or bins) and a set of relationships between those classes. The classes and relationships can have attributes associated with them, which are also pieces of information. At the ontology level, all we see is a mass of data, so most of us want to see pictures instead, and most architects and systems engineers seem to prefer that view.

An alternative to the DM2 is the Lifecycle Modeling Language (LML), which contains both an ontology and a set of diagrams to capture the necessary technical and program information associated with a project. This language uses an ontology that appears simpler than the DM2, but actually hides the complexity through the large number of relationships between entities and the fact that the relationships can have attributes associated with them. LML purports to be the 80% solution to the ontology, meaning that you may decide to extend it to meet your specific needs, but let’s stick with the baseline here. LML has 20 classes of entities, and each entity class has a number of relationships associated with it (over 40 in total). So, if we ignore the attributes, we have 20(20−1)/2 = 190 possible pairwise combinations of information. Does that mean we need 190 or more diagrams to visualize all this information? Perhaps; it could be fewer, but it could also be more. Can we really maintain that many different diagrams to represent this information, which is what a Framework would require? And of course, if we add in all the attributes of both the classes and the relationships, we are trying to display far more information than this.
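
The arithmetic is easy to check:

```python
# Pairwise combinations of 20 entity classes: n(n-1)/2.
import math
print(math.comb(20, 2))   # 190
```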

So, Frameworks are a useful starting point, which is how John Zachman uses his framework, and that may be enough for enterprise architecture, but it is not a panacea for all systems engineering problems. Sooner or later, as we decompose the system to a level where we can determine what to buy or build, and as we manage these kinds of projects, you will likely need a more robust approach. LML and this kind of ontology make it much easier to capture, manage, and visualize the information. See for yourself. Go to www.innoslate.com and try out the tool that uses LML for free. Explore the Schema Editor to see the entire ontology. Play with the “Getting Started” panel examples. I think once you do, you will find this approach works much better than the Frameworks. In addition, Innoslate has a “DoDAF Dashboard” that enables you to create DoDAF models directly from the dashboard, so if that’s what you are more familiar with, you will find it the easiest way to get started. Notice that many of the other models are automatically populated with information from the models you have already built. That’s because Innoslate reuses that information to automatically create the other views!