Ford vs. Mazda Transmissions: Why Does Quality Matter?

In the 1980s, Ford owned roughly 25% of Mazda (then known as Toyo Kogyo). Ford had Mazda manufacture some automatic transmissions for cars sold in the United States. Both companies built the same transmission from the same specification, and both achieved 100% specification conformance. However, the Ford-built transmissions were receiving more customer complaints about noise and incurring higher warranty repair costs. This led Ford engineers to investigate, and they found that the Ford-manufactured transmissions used 70% of the available tolerance spread for manufactured parts, while Mazda used only 27% (AC 2012-4265: Promoting Awareness in Manufacturing Students of Key Concepts of Lean Design to Improve Manufacturing Quality). The Ford engineers began to realize that the Mazda transmissions were higher quality than their own. It turned out that Mazda was using a slightly more expensive grinding process than Ford. This raised Mazda's manufacturing costs; however, the full lifetime costs were higher for the Ford-manufactured transmissions.
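To make the tolerance-spread comparison concrete, here is a minimal Python sketch (with invented numbers, not actual Ford or Mazda data) of the two ideas at play: how much of the specification band a batch of parts consumes, and a Taguchi-style loss function under which any deviation from target costs something over the lifetime, even for parts that are fully "in spec":

    # Illustrative only: sample values are invented to mirror the 70% vs. 27% story.
    def tolerance_utilization(samples, lsl, usl):
        """Fraction of the spec band [lsl, usl] consumed by the observed spread."""
        return (max(samples) - min(samples)) / (usl - lsl)

    def taguchi_loss(x, target, k=1.0):
        """Quadratic loss: cost grows with any deviation from target."""
        return k * (x - target) ** 2

    target, lsl, usl = 10.0, 9.7, 10.3   # spec: 10.0 +/- 0.3 (hypothetical units)
    ford_like  = [10.0 + 0.070 * d for d in range(-3, 4)]  # wide spread, still in spec
    mazda_like = [10.0 + 0.027 * d for d in range(-3, 4)]  # tight spread

    for name, batch in (("Ford-like", ford_like), ("Mazda-like", mazda_like)):
        use  = tolerance_utilization(batch, lsl, usl)
        loss = sum(taguchi_loss(x, target) for x in batch)
        print(f"{name}: {use:.0%} of tolerance used, total loss {loss:.4f}")

Both batches conform 100% to the spec, yet the wide-spread batch accumulates several times the loss, which is the story's point.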

This story is a prime example of why it is important to think about quality. Too often we focus on other metrics and neglect quality, or we use a single metric to define quality. Ford experienced this by focusing on a "Zero Defects" policy, thinking that zero defects in a transmission would produce a quality transmission. Mazda went further and took the whole lifecycle cost and experience into consideration as it developed its transmissions. With this holistic view, it is easy to see why engineers need to think about quality across a program's entire lifecycle.

Building Quality into the Lifecycle

If the goal of an organization is to deliver a quality product, engineers at all stages need to think about how they can add quality to the system. An easy way to think about adding quality is to ask yourself: "What are the extra details, the extra effort, the extra care that can be put into the product?" When these extra efforts are applied to a properly defined system, the output is often a quality system. To a program manager, all this extra effort sounds like a fair amount of extra cost. This is true; however, it is important to weigh the short-term cost increase against the potential long-term cost savings. Below are two examples of how to add quality across the lifecycle.

Design

One of the first steps of the design effort is requirements building, and unfortunately a requirement like "system shall be of a quality design" does not cut it. Never mind that this requirement violates nearly all the rules of good requirements; it fails to take into account the characteristics of a quality system. Is quality the "spare no expense" engineering of high-end audio systems, or the good-quality-for-the-price appeal of Japanese-manufactured cars in the 1970s? It is important to identify how the customer and the market define quality. Having this understanding informs choices going forward and prevents a scenario where the market doesn't value the added quality efforts.

Procurement/Manufacturing

The procurement/manufacturing phase of the lifecycle is where quality efforts are most visible. As parts are being ordered, it is important to think about how the whole supply chain thinks about quality. This involves reviewing the suppliers' own suppliers to verify that the parts being delivered do not have a poor design or a possible defect that could be hidden through integration. For internally manufactured parts, is extra effort being made to check that the solder on pins is clean and will not short other sections under heating? Extra thought and care should be given to the human interface of the system, as this normally plays a major role in determining the quality of a system. For software: do user interfaces make sense, do they flow, are they visually appealing? These are the kinds of questions that should be asked to help guide engineers toward building a quality system.

 “Quality Is Our Top Priority”

All too often I find that a Scott Adams Dilbert comic strip highlights a common problem engineers face. The comic below is a perfect example: Pointy-Haired Boss directing Dilbert, Alice, and Wally to focus on quality.


DILBERT from Sunday March 28, 2004

 

What Pointy-Haired Boss fails to realize is that quality and the rest of his priorities are not mutually exclusive; they can be pursued concurrently. A quality system is one that is safe, law abiding, and financially viable. Quality should be added on top of these factors, while making sure that the extra bit of design work is worthwhile. All of these factors, properly combined with good design and engineering, produce a quality system.

By Daniel Hettema. Reposted from the SPEC Innovations blog with permission.

How to Best Migrate Your Data to Innoslate


Leaving your old tool might seem daunting, but Innoslate is a modern cloud-computing tool that makes moving your data simple. Here are a few simple steps to make migrating your data to Innoslate easy.

 

#1 Organize Your Data

Assess how much data you really want to move into Innoslate. It's just like moving out of your home: get rid of the stuff you don't need and only move the really valuable items. You may find there is very little data that needs to be moved into a new tool. This is true of existing, long-term projects as well as new ones. By getting rid of the clutter, you can improve your practice significantly.

 

#2 Extend the Schema to Meet Your Needs

During step #1, you'll want to make note of your project's schema. Make sure that Innoslate can accept the schema changes you've made and easily import the data from a CSV file or other format. See how to make changes to the schema here: https://help.innoslate.com/users-guide/database/schema/

 

#3 Check Your Data’s File Types

Innoslate's Import Analyzer can accept many different file types. It provides drag-and-drop import for .csv, .docx, and .xml files, and there is also an advanced importer for .xmi, .pdf, and .txt files. For your old tools' analytics, Innoslate already has built-in analytics to replace their functionality, or at least an open Software Development Kit (SDK)/Application Programming Interface (API) that allows you to replicate it. If you were previously using modeling tools, use the XMI importer or another capability that can read files exported from your existing tool. Note that since you may be coming from what is essentially a drawing tool into a data-driven tool, you may have to redraw some of the diagrams. That's really an opportunity to make sure the diagrams do not contain errors. You can test the results of your models to ensure accuracy using the Discrete Event simulator.
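If your legacy tool only exports a complicated CSV, a little scripting before import can help. Here is a minimal sketch, assuming hypothetical column names ("ReqID", "Text") in the legacy export; check the Import Analyzer documentation for the column layout Innoslate actually expects:

    import csv

    # Reshape a legacy export into a simple two-column file for import.
    # "ReqID" and "Text" are assumed legacy column names, not a real template.
    with open("legacy_export.csv", newline="") as src, \
         open("innoslate_ready.csv", "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.writer(dst)
        writer.writerow(["Name", "Description"])
        for row in reader:
            # Skip the clutter you decided not to move in step #1.
            if row["Text"].strip():
                writer.writerow([row["ReqID"], row["Text"].strip()])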

 

#4 Archive Your Data

After you have imported all of your data into Innoslate, create a copy of your project. This is a great safety measure that gives you an archived snapshot to fall back on.

 

#5 Baseline All Your Documents

Then make sure to baseline all the data that you originally imported into Innoslate. Baselining allows you to create copies of your original project. See https://help.innoslate.com/users-guide/documents/requirements/baselining/ for instructions.

 

#6 Need Help? Ask for It

With the new software as a service (SaaS) model, support is part of the package and not an additional charge. Use that support to help you in this move. Think of them as movers you have already paid for. Contact support at support@innoslate.com.

If you want to learn more about Innoslate, request a free trial or contact us. If you want to see a tool that has all these features and provides the support you need, check out Innoslate® at www.innoslate.com.

Move Past Spreadsheets with Modern Requirements Management

Are you still using Microsoft Office to capture, manage, analyze, and trace all your requirements? Products and systems increase in complexity every day, and you need a requirements management tool that can properly handle large, complex projects.

When you use spreadsheets for requirements management, you increase your time to market. Even worse, not using a modern requirements management solution can result in a higher risk of product failure. A CIO report found that "as many as 71 percent of software projects that fail do so because of poor requirements management." Poor requirements management occurs when teams use antiquated RM tools that do not have the needed traceability, collaboration, and quality analysis features.

Traceability needs to happen through the entire process. It's much simpler to get full project traceability if you can map your process in the same place you create your requirements. That's why more and more companies are looking at robust solutions for their requirements management: solutions like Innoslate, which have built-in collaboration features, traceability, test processes, system processes, and more. Innoslate has the benefit of being a full lifecycle tool. You can start with requirements management, develop a process for the product and system, and then verify and validate that the process meets the requirements.

Modern requirements tools should be able to trace between requirements and other classes and produce reports such as the RTM, RSM, and RVM. In Innoslate, you can also use the Test Center feature to trace requirements to your verification actions (test cases) and create a complete RVTM.
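The bookkeeping behind such a matrix is simple to picture. Here is a minimal sketch (illustrative data structures, not Innoslate's API) that maps requirements to the test cases verifying them and flags the gaps a tool would surface automatically:

    # Each requirement maps to the test cases that verify it.
    requirements = ["REQ-1", "REQ-2", "REQ-3"]
    verified_by = {"REQ-1": ["TC-101", "TC-102"], "REQ-3": ["TC-103"]}

    # Print a simple verification matrix; unverified requirements stand out.
    for req in requirements:
        tests = verified_by.get(req, [])
        print(f"{req}: {', '.join(tests) if tests else 'NOT VERIFIED'}")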

Another major problem with spreadsheets is that teams can barely communicate with each other. It becomes difficult to keep files updated; files are often shared between people using the same tools, but cross-sharing isn't really possible. Large teams with large, complex requirement sets need to be able to communicate effectively. Cloud RM tools provide the ability to collaborate and keep information accurate. You can also look for on-premises solutions that offer your team collaboration but still meet your security needs. Innoslate offers the ability to work collaboratively throughout the entire project. With Innoslate you can communicate quickly via chat and comments and keep a record of your conversation. Version control allows team members to work together on the same requirements document, saving you time and reducing errors.

Of course, with all these collaboration features you need strong program management controls. A program manager can see every change made to a requirement and the team member who made it, and can revert to older versions. Baselining allows you to see changes throughout the entire history of the document. With permissions, you can determine which team members have owner, read/write, or read-only privileges on your project. Branching and forking provide even stricter controls, allowing the program manager to split off certain sections of a project to different groups. From there, the program manager can decide which changes to accept back into the main project.

Spreadsheets were not specifically designed for capturing, managing, and analyzing requirements. Microsoft Office's spell check was built to help maintain proper grammar and spelling, but Innoslate has a quality analysis feature that looks for mistakes specific to requirements, such as combining multiple requirements into one (which can make verification impossible) or writing requirements that aren't specific enough. These mistakes are costly and can result in poor requirements management. Innoslate can improve your entire requirements document by finding these mistakes for you.

It’s important to find a modern solution that can allow you to move past spreadsheets with traceability, collaboration, and quality analysis.

Watch “Move Past Spreadsheets with Modern Requirements Management” webinar.

10 Most Important Requirements Capture and Management Rules

Requirements documentation plays an important role in systems engineering. Writing high-quality requirements can save not only millions of dollars, but lives. No matter how experienced you are, it's important to remind yourself of requirement-writing rules and techniques.

  1.  Know Your Stakeholders

    The first and most important rule of writing requirements is to know your stakeholders. Understand what common knowledge they have and make sure you are all on the same page. Understand each stakeholder group's priorities and objectives. You do not want each group to develop its own priorities and objectives separately; separate priorities and objectives result in a time-consuming and expensive review process with lots of conflicts. Collaborative software that allows for continuous reviewing will help you keep up with all the stakeholders' needs. You never want to give them a completely finished product and only then ask for review (although that is common practice).

  2. Remember the CONOPS

    Most of you will probably not forgo the Concept of Operations (CONOPS), since it is such a valuable artifact. The CONOPS is something that all the stakeholders understand and collaborate on together. In this step you essentially create stories that consider different scenarios and needs. From there you will have a better understanding of where to start with your requirements. The CONOPS will help you write quality requirements by surfacing all the assumptions. It will help evaluate the 'what if' scenarios, make testing easier, and formulate your needs into requirements.

  3.  Understand What is Really Needed

    First of all, there is a huge difference between want and need. Will the system work without a particular requirement? If you answered yes, then you can probably omit that requirement. A common mistake systems engineers make is listing possible solutions to needs rather than the actual needs. If your need is an efficient way to communicate, don't specify cell phones, since there are many other forms of communication that may be more feasible, less expensive, or more effective. List what is actually needed; don't list possible solutions to the needs.

  4. Be Specific (Give actual numbers. Don’t leave room for assumptions.)

    Leaving room for assumptions is leaving room for error. If you are not careful with the language you choose, you could end up forcing costly assumptions. Words such as "minimize," "maximize," "etc.," "and/or," and "more efficient" force the stakeholders to assume. Don't let the stakeholders assume how much you want to minimize. For example:

    • "etc." can mean almost anything
    • "and/or" forces the reader to guess whether it's 'and' or 'or'
    • "minimize"/"maximize": don't just say "minimize expansion," say "minimize expansion to 300"
    • don't just say "quick," say how quick
    • give actual numbers (ambiguous words like these can even be flagged automatically; see the sketch after rule #10)
  5. Do Not Be Too Specific

    The only mistake worse than not being specific enough is over-specifying. You want to be specific, but not too specific. Carefully review your requirements before baselining, and during this review delete any unnecessary specifics.

    Allow a range with your numbers: if expanding 300% +/- 10% is good enough, then give that tolerance. Base any numbers on the results of analyses, not just someone's "engineering judgment."

  6. Give Requirements Not Instructions

    Understand what is needed and create requirements from those needs. This is why rule #1 is so important: if you understand your stakeholders' needs, writing requirements rather than instructions becomes an easier task. It might be tempting to just write instructions, but that is not what requirements are for. Requirements should provide enough information to allow the builder to provide the most cost-effective solution to the problem.

  7.  Use the Words ‘Shall’, ‘Should’, and ‘Will’

    The industry-standard word usage is "shall" for a requirement, "should" for a goal, and "will" for a statement of fact. For example: "The system shall operate on 120 V power" (requirement); "The system should weigh less than 10 kg" (goal); "The operator will supply 120 V power" (statement). If you do not use these standard word choices, you will confuse the other stakeholders.

  8. Include a Rationale

    A rationale justifies the inclusion of a specific requirement. Attach a rationale to each requirement by explaining the need for the requirement. The rationale provides reviewers and implementers with additional information on the intent of the requirements, thus avoiding confusion down the line.

  9. Use Proper Grammar

    You will prevent a lot of costly mistakes due to confusion if you use proper grammar. For example, run-on sentences can make two requirements appear to be one. One technique for improving grammar is to write bullet points first and then construct sentences out of them.

  10. Use a Standard

    Use a standard to ensure consistency. Three common standards are MIL-STD-490, IEEE, and ISO. You should choose one that is right for your industry.

    MIL-STD-490: This United States Military Standard establishes the format and content of specifications for the United States Department of Defense. It can be useful in other areas as well.

    IEEE: The Institute of Electrical and Electronics Engineers Standards Association develops the IEEE standards. Unlike the MIL-STDs, the IEEE reaches a broad range of industries, including transportation, healthcare, information technology, power, energy, and much more.

    ISO: The International Organization for Standardization develops standards to help businesses optimize productivity and minimize costs.
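As promised under rule #4, here is a minimal sketch of automated ambiguity checking, assuming requirements are plain strings; the word list is a small illustrative sample, not a complete one:

    import re

    # Words from rule #4 that force readers to assume.
    AMBIGUOUS = ["etc", "and/or", "minimize", "maximize", "quick", "efficient"]

    def flag_ambiguity(requirement):
        """Return the ambiguous words found in a requirement string."""
        return [w for w in AMBIGUOUS
                if re.search(rf"\b{re.escape(w)}\b", requirement, re.IGNORECASE)]

    for req in ["The system shall minimize expansion.",
                "The system shall limit expansion to 300% +/- 10%."]:
        issues = flag_ambiguity(req)
        print(req, "->", issues if issues else "OK")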

Why We Built Innoslate – About Us

As systems engineers who had been using modeling and requirements tools for decades, we kept running into the same problem: we needed a solution that spans the entire system lifecycle. Effective requirements analysis and management requires not only capturing and managing the requirements provided by the customer, but also analyzing and decomposing those requirements into specifications for buying and building the system components.

Requirements analysis includes modeling the processes and procedures related to the requirements and simulating those processes with realistic constraints on resources, bandwidth, latencies, and many other factors. And once we have a baselined set of requirements, we not only have to "maintain" them (as they constantly change over the lifecycle), but we also have to verify that the resulting components and systems meet the requirements at every level of composition (component, subsystem, system, system of systems, etc.).

We realized what most systems engineers needed was a requirements tool, a modeling tool, a simulation tool, a verification tool, a risk tool, a program management tool, and a document management tool. None of the existing software solutions had native integration across all these different tools. The International Council on Systems Engineering (INCOSE) runs a tools interoperability working group, which includes representatives from all the major tool vendors. Unfortunately, without native integration, users experience missing data points and a drain on the time and resources spent integrating. It also became unreasonably expensive to purchase so many different tools and to manually integrate and manage each one.

SPEC Innovations built Innoslate to be the all-in-one solution for systems engineers and program managers. We also wanted engineers to start analyzing the overall quality of the entire project early in the lifecycle. That's why Innoslate provides analytical capabilities like the Requirements Quality Checker and Intelligence View. These two built-in features quickly identify problems with the requirements and the overall model, so they can be fixed early in the process, saving time and money. On top of that, you get the scalability and collaboration features you need to work on not only small projects, but also very large ones.

PLM Moving to the Cloud

Why Product Lifecycle Management Is Moving to the Cloud

"The cloud" means many things to many people. It's a common misconception that the cloud is the Internet itself; people who hold it think that all the information they put on the cloud can be easily "hacked," so they see the cloud as a very public place. But those who work in cloud computing see it as a means to deliver safe, secure services to more people at a lower cost. You can share computer resources, including CPU power, memory, and storage. This sharing, or "on-demand" use, of computer resources means that you pay less for those resources than if you had provisioned them on your own.

To take advantage of this resource sharing, you must use applications that take this new environment into account. Just putting a "web front end" on a client-server or desktop tool does not work well. Application programmers must re-architect their code to take advantage of this new capability and, at the same time, deal with its problems, such as latency: data must now pass between the servers where it is stored and the web browser on the client machine you are using. Those servers may be down the hall or a few miles away, so there can be substantial delays in data transmission.

Scalability

Most desktop/client-server tools assume very little latency, so they grab a lot of information at a time and put it into local memory. That's fine when you are close to the data, but in cloud computing the servers could be anywhere in the world, or at least across the continent. So when people try to use a desktop tool in this new environment, it begins to break down quickly in terms of response time. Another way to say this is that these tools do not scale to meet growing needs. But the whole idea of cloud computing is to allow the application to scale to meet the needs.

Collaboration

Cloud computing also enables worldwide collaboration. So now the need to scale becomes critical, as more and more people work together and capture and generate more and more information. A "web-based" tool must be designed to process more information locally, including visualization of the data. Otherwise, we are back to central computing, where you had a dumb terminal connected to a computer that was often far away. I can still remember how slow the response was when that occurred. Even though bandwidth has grown to gigabits per second, we are trying to move terabytes of information (at 1 gigabit per second, a single terabyte still takes over two hours to transfer).

PLM on the Cloud

So, what does all this have to do with Product Lifecycle Management (PLM)? PLM today requires a large amount of data, analytical tools to transform data into information, and personnel who collaborate to create the products. Clearly, PLM stands to benefit greatly from the cloud computing environment. But where are the cloud computing products for this market? Legacy tool makers are reluctant to re-architect hundreds of thousands of lines of code. Such an effort would take years and be very expensive, only to compete with their own products during the transition. So most have created a "web front end" that provides limited access to the information that exists in the client-server or (worst case) desktop product.

Innoslate® is the rare exception in the PLM marketplace. Innoslate was designed from scratch as a cloud computing tool: the database backend persists the data, while the web front end visualizes it and performs the necessary analyses, including complex discrete event and Monte Carlo simulations. Innoslate supports all areas of PLM, from systems engineering to program management, product design, process management, data management, and more. All this in one simple, collaborative, scalable, and easy-to-use tool. Check out www.innoslate.com for details.

Overview of DoDAF with Innoslate Webinar

It’s that time again. Dr. Steve Dam will be hosting one of our most popular webinars, “Overview of DoDAF with Innoslate.” Make sure to register for our latest webinar on DoDAF 2.0, on Thursday, August 24th at 2:30 pm EST.

Register here

Your webinar host, Dr. Steve Dam, will provide an in-depth overview of DoDAF 2.0 using the systems engineering software Innoslate. Dr. Dam, the President and Founder of SPEC Innovations, participated in the development of DoDAF. He recently published "DoDAF 2.02 – A Guide to Applying System Engineering to Develop Integrated, Executable Architectures." The presentation will include a live demonstration of Innoslate, with a question-and-answer session to follow.

What will be covered?

  • A clear understanding of DoDAF
  • Knowledge of what makes a good methodology
  • Applicable use of the DoDAF Dashboard in Innoslate
  • An overview of DM2 concepts
  • Export to the Physical Exchange Specification

When? Thursday, August 24th at 2:30 pm EST

Where? https://register.gotowebinar.com/register/4963997416370276098

What is Model-Based Systems Engineering?

Systems engineering is the discipline of engineering that endeavors to perfect systems. As such, it is a kind of meta-engineering that can be applied across all complex team-based disciplines. The idea of systems engineering is to enhance the performance of human systems; it has as much to do with engineering team performance, meetings, and political agreements as with hardware. It is not quite a social science, but it is the science of perfecting human outcomes.

The International Council on Systems Engineering (INCOSE) has been the organizing body for systems engineering programs. The field has been emerging as a discipline with its own unique training and advanced degree programs; as of 2009, some 80 United States institutions offered undergraduate and graduate programs in systems engineering.

Systems-centric programs treat systems engineering as a separate discipline, with a specific focus on a separately developed body of theory and practice. Domain-centric programs are embedded within conventional engineering fields. All programs are designed to develop the capability of managing and overseeing large-scale engineering projects.

The field of systems engineering has developed its own unique set of tools and methodologies that have less to do with the physics and mathematics of the project's hardware and more to do with the process of bringing the elements of the project together. Modeling software here refers to modeling the process of creation, not so much to models of what is being created. These tools enable members of a project team to better collaborate and plan the process of creating a complex finished product, such as a space station or a skyscraper.

Innoslate has developed into an all-inclusive baseline tool for modeling and managing engineering projects as systems. It includes full collaboration features that allow all team members to work from the same information base, to contribute new information, and to have access to what has been accumulated. It has a rich vocabulary of diagramming and flow-charting media to illustrate, change, and embellish the engineering process over time. It also provides clear feedback about the growth of the project: where it has achieved goals and where it lags behind.

Reposted from SPEC Innovations with permission.

 

How to Keep MBSE from Becoming Just a Buzzword (or Is It Too Late?)

The term “Model-Based Systems Engineering” or “MBSE” has been around for nearly a decade. We see the term in requests for proposals, marketing materials, social media, conferences and many other places in the systems engineering community and even in the general public. Clearly, MBSE has become an important part of systems engineering, but has it also become the definition of a buzzword? First, take a look at the definition of a buzzword.

buzz·word

[buhz-wurd]

NOUN

  1. a word or phrase, often sounding authoritative or technical, that is a vogue term in a particular profession, field of study, popular culture, etc.

Source: Dictionary.com

So, it definitely sounds authoritative, as it comes from the “International Council on Systems Engineering” (INCOSE). It sounds technical, using “Model-Based” and “Systems Engineering.” And clearly, it’s “in vogue,” from its appearance everywhere.

 

What the definition of a buzzword doesn't seem to capture is its negative connotation, the sense (as the Dilbert comic strip puts it) that a buzzword is used by people who don't really know what it means.

       

 

I'm sure we have all heard many people use "MBSE" without any idea of what it means. So, what does MBSE really mean?

 

Well, to understand its real meaning, we need to review the definition of MBSE from INCOSE:

 “Model-based systems engineering (MBSE) is the formalized application of modeling to support system requirements, design, analysis, verification and validation, beginning in the conceptual design phase and continuing throughout development and later life cycle phases.” – INCOSE

 

As systems engineers, the first thing we want to do is decompose this rather long sentence. It can be broken down into two parts:

  • Modeling (formalized application); and
  • Lifecycle (system requirements, design, analysis, verification and validation).

 

The formalized application of modeling means that we create models of the system using a "standard." There are a number of formal and informal standards, applied in many different ways. The standard most people are familiar with is SysML, since it is a profile of UML; SysML focuses primarily on communicating with the software community. The Lifecycle Modeling Language (LML) open standard (www.lifecyclemodeling.org) covers the second part of the definition better, as its name implies. It also addresses the program management aspects of systems engineering (risk, cost, schedule, etc.), none of which are really addressed by SysML.

 

But we have been creating drawings, which are a type of model, since well before anyone called the discipline systems engineering. So, what makes this term different from classic systems engineering?

 

The key difference is the type of modeling we use when we talk about MBSE: we mean the development of "computable models." Computable models are models based on data (usually in a standard ontology, like the one LML provides) that can be visualized in standard ways (again using any drawing standard, which both SysML and LML provide). These models can also be tested to determine their validity and to make sure we don't introduce errors in logic or problems related to dynamic constraints (e.g., lack of resources, bandwidth, latencies). This testing also includes checking the models against general rules of quality, such as "all function names should start with a verb." The tools for this kind of testing today include simulation (e.g., discrete event, Monte Carlo) and natural language processing (NLP).
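One of those heuristic checks is easy to picture. Here is a minimal Python sketch, using a small hard-coded verb list where a real tool would use NLP:

    # Heuristic: every function (Action) name should start with a verb.
    COMMON_VERBS = {"perform", "provide", "transmit", "receive",
                    "store", "process", "display", "generate", "validate"}

    def starts_with_verb(action_name):
        return action_name.split()[0].lower() in COMMON_VERBS

    for name in ["Transmit Telemetry", "Telemetry Data"]:
        verdict = "ok" if starts_with_verb(name) else "should start with a verb"
        print(f"{name}: {verdict}")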

 

Having models that can be tested, and actually testing them, is a clear way to make MBSE real and not a buzzword. Therefore, to implement MBSE you need a tool or set of tools to conduct this testing.

 

When considering an "MBSE" tool, you will hear claims from almost all of the tool vendors that theirs is one. To distinguish between those who deliver on the promise of MBSE and those who are treating it as a buzzword, just ask the following questions:

 

  1. Are your diagrams essentially drawings or are they automatically generated from the data?
  2. If I make a change to one piece of data in the database is that automatically updated in all the other visualizations of that piece of data, including the diagrams?
  3. Can I execute the models using strong simulation techniques?
  4. Do those simulation techniques include discrete event and Monte Carlo?
  5. Do the simulations take into account resource, latency, and bandwidth constraints?
  6. Does your tool test the entire model against common standards of good practice (heuristics)?
  7. Does your tool support the entire lifecycle (system requirements, design, analysis, verification and validation) in a seamless, integrated fashion?

 

If you ask all these questions, you will find a limited set of tools that can even come close to keeping MBSE from just being a buzzword. So, it’s essential that you carefully evaluate these tools to make sure they provide the support you need to become more productive and produce higher quality products. To see a tool that does meet all these needs check out www.innoslate.com.

Quick Guide to Innoslate’s Ontology

Innoslate uses the Lifecycle Modeling Language (LML) ontology as the basis for the tool’s database schema. For those new to the word “ontology,” it’s simply the set of classes and relationships between them that form the basis for capturing the information needed. We look at this in a simple Entity-Relationship-Attribute (ERA) form. This formulation has a simple parallel to the way we look at most languages: entities represent nouns; relationships represent verbs; attributes on the entity represent adjectives; and attributes on relationships represent adverbs.
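In code, the ERA pattern looks roughly like this minimal sketch (illustrative classes, not Innoslate's actual schema):

    from dataclasses import dataclass, field

    @dataclass
    class Entity:                 # a noun
        name: str
        attributes: dict = field(default_factory=dict)   # adjectives

    @dataclass
    class Relationship:           # a verb connecting two nouns
        source: Entity
        verb: str
        target: Entity
        attributes: dict = field(default_factory=dict)   # adverbs

    radio = Entity("Radio", {"mass": "2 kg"})
    action = Entity("Transmit Signal")
    rel = Relationship(radio, "performs", action, {"when": "on command"})
    print(f"{rel.source.name} {rel.verb} {rel.target.name} ({rel.attributes['when']})")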

LML contains twelve (12) entity classes and eight (8) subclasses. They represent the basic elements of information needed to describe almost any system. The figure below shows how they can be grouped to create the models needed for this description.

Most of these entity classes have various ways to visualize the information, which are commonly called models or diagrams. The benefit of producing the visualizations using this ontology means that when you create one model, other models that use the same information will automatically have that information available.

All these entities are linked to one another through the relationships. The primary relationships are shown below.

 

This language takes a little getting used to, like any other language. For example, you might be used to referring to something functional as a Function or an Activity; these are both "types" of Action in LML, implemented as labels in Innoslate. Similarly, you may be used to different relationship names for parents and children in different entity classes. By using the same verbs for all parent-child relationships, however, you avoid the confusion of having to remember many different verbs.

You still might need other ontological additions. LML was meant to be the "80% solution." Look very closely at the ontology first; often you only need to add types (labels) or an attribute here and there. You should rarely need to add new classes and relationships. If you do add new classes, try to do so as subclasses of existing ones, so that you inherit their diagrams as well. For example, when the Innoslate development team added the new Test Center, they decided to extend the Action class. This let the TestCase class inherit the Action class's functional diagrams, as well as the status, duration, and other attributes that were important.
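The subclassing idea can be sketched in a few lines (class shapes are illustrative, not Innoslate internals): the new class inherits the parent's attributes, and in the tool it inherits the parent's diagrams as well:

    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        status: str = "planned"
        duration_hours: float = 0.0

    @dataclass
    class TestCase(Action):       # inherits name, status, duration, ...
        expected_result: str = ""

    tc = TestCase(name="Verify REQ-1", duration_hours=2.0,
                  expected_result="Telemetry received within 5 s")
    print(tc)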

Hopefully, you can see the benefits of using LML as the basis for Innoslate’s schema. It was designed to be:

  • Broad (covers the entire lifecycle, technical and programmatic)
  • Ontology-based (enables translation from LML to other languages and back)
  • As capable as SysML (with the LML v1.1 extensions) and DoDAF
  • Simple in structure
  • Useful for stakeholders across the entire lifecycle

For more information, see www.lifecyclemodeling.org and visit the Help Center at help.innoslate.com.