It’s Time for Government to Embrace the Cloud Computing Revolution

We are sometimes our own worst enemies! We want something, but at the same time we put up barriers to obtaining it. A perfect example occurred at an Industry Day I recently attended. The customer had put out a request for information (RFI) and was holding a day to brief potential contractors on the program. No procurement was discussed, only information about how they wanted to implement model-based systems engineering (MBSE). In particular, they wanted to know what kind of contracting language would produce better requests for proposals (RFPs). However, they also said that we could not have one-on-one technical conversations with the government technical personnel. I call that a “self-inflicted denial of service attack.”

Avoiding cloud computing is the most common self-inflicted denial of service we encounter. We are all now familiar with denial-of-service attacks, such as the ones that have taken down Domain Name System (DNS) providers. They seem to be a frequent occurrence, and it’s frustrating when we can’t get on our favorite website because a troll has attacked it.

Because of these trolls and all their attack vectors, many in government have resisted adopting cloud computing. They think: “clouds are dangerous … I don’t have control over my data … someone might steal it.” All the while, their corporate networks have been hacked by every major player in the world. If someone hacks into your corporate network, everything they get is related to your organization and what it does. In other words, everything they get is gold. But isn’t cloud computing, as provided by large providers like Amazon, Google, and Microsoft, more secure than your corporate networks?

Let’s take Google, for example. First, they don’t tell anyone the location of their data centers. They provide complete physical security. They build all their own servers from scratch and destroy them at the end of their useful life. They have all the firewalls and software detection capabilities needed, and more. They encrypt data at rest (and you should be sending encrypted data via HTTPS, at least). They randomize the filenames, so you need a map to find anything. They meet and exceed the FedRAMP requirements.

Does your corporate (or government) network do all that? Probably not. An Amazon Web Services representative explained to me, “FedRAMP requires over 200 security controls; we have over 2,000 of them.” The last thing anyone from these major “public” cloud providers wants is some hacker successfully penetrating their network and capturing critical user data. They could (and would) be sued.

I was talking to a gentleman from the government about cloud computing the other day and he told me, “No one has ever told me how they can clean up a spill on the cloud.” [For those not in the know, a “spill” is when you accidentally put information somewhere it doesn’t belong.] I did not have the presence of mind at the time, but I should have asked, “What do you do now with your enterprise e-mail system?” I can guarantee they do not go around tracking down backups and destroying hard drives. Deleting the data results in it being written over hundreds of times in a matter of minutes.

So, it’s time to stop committing denial of service attacks on ourselves. It’s time to embrace the cloud computing revolution and get on board. The commercial world, for the most part, did this half a decade ago. If we want to speed up and improve government, we need to figure out how to use the cloud now.

How to Choose the Right MBSE Tool

Find the Model-Based Systems Engineering Tool for Your Team

A model-based systems engineering tool can provide you with accuracy and efficiency. You need a tool that helps you do your job faster, better, and cheaper. Whether you are using legacy tools like Microsoft Office or are looking for an MBSE tool that better fits your team, here are some features and capabilities you should consider.

Collaboration and Version Control

It’s 2018. The MBSE tool you are looking at should definitely have built-in collaboration and version control. You want to be able to communicate quickly and effectively with your team members and customers. Simple features such as a chat system and comment fields are a great start. Workflow and version control are more complex features but very effective. Workflow is a great feature for a program manager: it allows the PM to design a process workflow for the team that sends out reminders and approvals. Version control lets users work simultaneously on the same document, diagram, etc. If you are working in a team of two or more people, you need a tool with version control; otherwise you will waste a lot of time waiting for a team member to finish a document or diagram before you can work on it.
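To make the version-control point concrete, here is a minimal sketch assuming a simple optimistic-revision scheme. This is illustrative only, not how Innoslate actually implements it:

```python
# Hypothetical sketch of optimistic version control (illustrative only,
# not Innoslate's actual implementation): every save must state which
# revision it was based on, so two users editing the same entity cannot
# silently overwrite each other's work.

class VersionConflict(Exception):
    pass

class Entity:
    def __init__(self, name):
        self.name = name
        self.revision = 0
        self.history = []          # list of (revision, name) snapshots

    def save(self, new_name, base_revision):
        if base_revision != self.revision:
            # Someone else saved first; the caller must merge and retry.
            raise VersionConflict(
                f"edit based on rev {base_revision}, current rev is {self.revision}")
        self.history.append((self.revision, self.name))
        self.name = new_name
        self.revision += 1

doc = Entity("System Requirements v1")
doc.save("System Requirements v2", base_revision=0)   # succeeds
try:
    doc.save("Conflicting edit", base_revision=0)     # stale revision
except VersionConflict as err:
    print("Conflict detected:", err)
```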

Built-in Modeling Languages Such as LML, SysML, BPML, Etc.

Most systems engineers need to be able to create uniform models. LML encompasses the necessary aspects of both SysML and BPML. If you would like to try a simpler modeling language for complex systems, LML is a great way to do that. A built-in modeling language helps you make your models correct and understandable to all stakeholders.

Executable Models

An MBSE tool needs to be much more than just a drag-and-drop drawing tool; the models need to be executable. Executable models ensure accurate processes through simulation. Innoslate’s activity diagram and action diagram are both executable through the discrete event and Monte Carlo simulators. With the discrete event simulator, you will not only see your process models execute, but you will also see the total time, costs, resources used, and slack. The Monte Carlo simulator will show you the standard deviation of your model’s time, cost, and resources.
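As a rough illustration of what a Monte Carlo simulator computes, here is a minimal sketch assuming a three-step serial process with made-up triangular durations. It is not Innoslate’s engine, just the underlying idea:

```python
# A minimal sketch (not Innoslate's engine) of what a Monte Carlo run
# over an executable process model reports: simulate a three-step
# serial process many times with random durations, then summarize.
import random
import statistics

def run_process():
    detect = random.triangular(1.0, 3.0, 2.0)   # hours: (min, max, mode)
    repair = random.triangular(0.5, 4.0, 1.0)
    test   = random.triangular(0.5, 1.5, 1.0)
    return detect + repair + test               # steps run in sequence

totals = [run_process() for _ in range(10_000)]
print(f"mean total time:    {statistics.mean(totals):.2f} h")
print(f"standard deviation: {statistics.stdev(totals):.2f} h")
```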

Easy to Learn

It can take a lot of time and money to learn a new MBSE tool, so you want a relatively short learning curve. First, look for a tool with an intuitive user interface. A free trial, sandbox, or starter account is a major plus; this lets you get a good feel for how easy the tool is to learn. Look for tools that provide free online training. It’s important that the tool provider is dedicated to educating their users. They should have documentation, webinars, and free or included support.

Communicates Across Stakeholders

Communication across the system/product lifecycle is imperative. Most of us work on very diverse teams: some of us have backgrounds in electrical engineering, others in physics, or maybe even business. You need to be able to communicate across the entire lifecycle. This means the tool should have classes that meet the needs of many different backgrounds, such as risk, cost, decisions, assets, etc. A tool that systems engineers, program managers, and customers can all understand is ideal. The Lifecycle Modeling Language (LML) is a modeling language designed to meet all these stakeholder needs.

Full Lifecycle Capability

A tool with full lifecycle capability will save you money and time. If you don’t choose a tool with all the features needed for the project’s lifecycle, you will have to purchase several different tools, each of which can cost as much as a single full-lifecycle MBSE tool. You will also have to spend more money on training, since you will not be able to train everyone as one large group. Most tools do not work together, so you will have to spend resources integrating them, which makes the overall project cost a lot more. This is why Innoslate is a full lifecycle MBSE solution.


It’s important to find the tool that is right for your project and your team. These are just helpful guidelines to help you find the right tool for you; you might need to adjust some of them for your specific project. If you would like to see if Innoslate is the right tool for your project, get started with it today or call us to see if our solution is a good fit for you.


Why Do We Need Model-Based Systems Engineering?

MBSE is one of the latest buzzwords to hit the development community.

The main idea was to transform the systems engineering approach from “document-centric” to “model-centric.” Hence, the systems engineer would develop models of the system instead of documents.

But why? What does that buy us? Switching to a model-based approach helps: 1) coordinate system design activities; 2) satisfy stakeholder requirements; and 3) provide a significant return on investment.

Coordinating System Design Activities

The job of a systems engineer is in part to lead the system design and development by working with the various design disciplines to optimize the design in terms of cost, schedule, and performance. The problem with letting each discipline design the system without coordination is shown in the comic.

If each discipline optimized for their area of expertise, then the airplane (in this case) would never get off the ground. The systems engineer works with each discipline and balances the needs in each area.

MBSE can help this coordination by providing a way to capture all the information from the different disciplines and share it with the designers and other stakeholders. Modern MBSE tools, like Innoslate, provide the means for this sharing, as long as the tool is easy for everyone to use. A good MBSE tool will have an open ontology, such as the Lifecycle Modeling Language (LML); many ways to visualize the information in different interactive diagrams (models); the ability to verify that logic and modeling rules are being met; and traceability between all the information from all sources.

Satisfying Stakeholder Requirements

Another part of the systems engineers’ job is to work with the customers and end-users who are paying for the product. They have “operational requirements” that must be satisfied so that they can meet their business needs. Otherwise they will no longer have a business.

We use MBSE tools to help us analyze those requirements and manage them to ensure they are met at the end of product development. As such, the systems engineer becomes the translator among the electrical engineers, the mechanical engineers, the computer scientists, the operator of the system, the maintainer of the system, and the buyer of the system. Each speaks a different language. The idea of using models was to provide this communication in a simple, graphical form.

We need to recognize that many of the types of systems engineering diagrams (models) do not communicate to everyone, particularly the stakeholders. That’s why documents contain both words and pictures. They communicate not only the visual but explain the visual image to those who do not understand it. We need an ontology and a few diagrams that seem familiar to almost anyone. So, we need something that can model the system and communicate well with everyone.

Perhaps the most important thing about this combined functional and physical model is that it can be tested to ensure that it works. Using discrete event simulation, the model can be executed to create timelines and to identify resource usage and cost. In other words, it allows us to optimize the cost, schedule, and performance of the system through the model. Finally, we have something that helps us do our primary job. Now that’s model-based systems engineering!
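For readers curious about the mechanics, a discrete event simulator boils down to popping events off a time-ordered queue. This toy sketch, with invented actions and times, shows the idea:

```python
# A toy discrete event loop (illustrative only): events are popped off
# a time-ordered queue to build a timeline of when actions complete.
import heapq

# (finish_time_hours, action) for two actions that start in parallel
events = [(1.5, "Detect fire"), (0.5, "Relay alert")]
heapq.heapify(events)

while events:
    t, action = heapq.heappop(events)
    print(f"t = {t:4.1f} h: {action} completes")
```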

Provides a Significant Return on Investment

We can understand the idea of how systems engineering provides a return on investment from the graph.

The picture shows what happens when we do not spend enough time and money on systems engineering. The result is often cost overruns, schedule slips, reduced performance, and program cancellations. Something not shown on the graph, since it is NASA-related data for unmanned satellites, is the potential loss of life due to poor systems engineering.

MBSE tools help automate the systems engineering process by providing a mechanism to not only capture the necessary information more completely and traceably, but also verify that the models work. If those tools contain simulators to execute the models and from that execution provide a means to optimize cost, schedule, and performance, then fewer errors will be introduced in the early, requirements development phase. Eliminating those errors will prevent the cost overruns and problems that might not be surfaced by traditional document-centric approaches.

Another cost reduction comes from conducting model-based reviews (MBRs). An MBR uses the information within the tool to show reviewers what they need to ensure that the review evaluation criteria are met. The MBSE tool can provide a roadmap for the review using internal document views and links, and provide commenting capabilities so that the reviewers’ questions can be posted. The developers can then use the tool to answer those comments directly. By not having to print copies of the documentation for every reviewer and then consolidate the markups into a document for adjudication, we cut out several time-consuming steps, which can reduce the labor cost of the review by an order of magnitude. This MBR approach can reduce the time to review and respond from weeks to days.


The purpose of “model-based” systems engineering was to move away from being “document-centric.” MBSE is much more than just a buzzword; it’s an important approach that allows us to develop, analyze, and test complex systems. Most importantly, we need MBSE because it provides a means to coordinate system design activities, satisfy stakeholder requirements, and provide a significant return on investment. The “model-based” technique is only as good as the MBSE tool you use, so make sure to choose a good one.

How to Use Innoslate to Perform Failure Modes and Effects Criticality Analysis

“Failure Mode and Effects Analysis (FMEA) and Failure Modes, Effects and Criticality Analysis (FMECA) are methodologies designed to identify potential failure modes for a product or process, to assess the risk associated with those failure modes, to rank the issues in terms of importance and to identify and carry out corrective actions to address the most serious concerns.”[1]

FMECA is a critical analysis required for ensuring the viability of a system during the operations and support phase of the lifecycle. A major part of FMECA is understanding the failure process and its impact on the operations of the system. The figure below shows an example of how to model a process to include the potential for failure. Duration attributes, Input/Output, Cost, and Resource entities can be added to this model and simulated to begin estimating metrics. You can use this with real data to understand the values of existing systems, or derive the needs of the system (thresholds and objectives) by including this kind of analysis in the overall system modeling.

[Figure: FMEA action diagram]

Step one is to build this Action Diagram (for details on how to do this, please reference the Guide to Model-Based Systems Engineering). Add a loop to periodically enable the decision on whether or not a failure occurs. The time between these decisions can be adjusted by the number of iterations of the loop and the duration of the “F.11 Continue Normal Operations” action.

Adjust the number of iterations by selecting the loop action (“F.1 Continue to operate vehicle?”) and pressing the </>Script button (see below). A dialog appears asking you to edit the action’s script. You can use the pull-down menu to select Loop Iterations, Custom Script, Probability (Loop), or Resource (Loop). In this case, select “Loop Iterations.” Then type in the number (choose 100), as seen in the figure below.

Next, change the duration of this action and of F.11. Since the loop decision is not a factor in this model, you can give it a nominally small time (1 minute, as shown). For “F.11 Continue Normal Operations,” choose 100 hours. Combined with the 90% branch percentage for this path, this means we have roughly 900 operating hours between failures, which is not unusual for a vehicle in a suburban environment. We could provide a more accurate estimate, including using a distribution for the normal operating hours.
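A quick back-of-the-envelope check of that 900-hour figure, under the stated assumptions:

```python
# Back-of-the-envelope check of the 900-hour figure, assuming each
# normal-operations cycle lasts 100 hours and the failure branch is
# taken 10% of the time.
p_fail = 0.10
hours_per_cycle = 100
# Expected number of successful cycles before the first failure
# (geometric distribution): (1 - p) / p = 9
expected_ok_cycles = (1 - p_fail) / p_fail
print(expected_ok_cycles * hours_per_cycle, "operating hours between failures")
# -> 900.0
```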

The 90% branch probability comes from the script for the OR action (“F.2 Failure?”). That selection results in the dialog box below.

Now assume a failure occurs approximately 10% of the time. Since the failure modes are probabilistic in nature, the paths need to be selected based on those probabilities. The second OR action (“F.3 Failure Mode?”) shows three possible failure modes. You can add more by selecting F.3 and using the “+Add Branch” button to represent other failure modes, such as “Driver failure,” “Hit obstacle,” “Guidance system loss,” etc.
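Under the hood, selecting a branch by probability amounts to a weighted random choice. The mode names and weights below are assumptions for illustration:

```python
# Selecting a branch by probability is just a weighted random choice.
# The mode names and weights below are assumptions for illustration.
import random

failure_modes = ["Flat tire", "Engine failure", "Battery failure"]
weights       = [0.60, 0.10, 0.30]    # branch probabilities, sum to 1.0

picks = random.choices(failure_modes, weights=weights, k=1000)
for mode in failure_modes:
    print(f"{mode}: {picks.count(mode) / 1000:.1%}")
```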

Note that to change the default names (Yes, No, Option) to the names of the failure modes, just double-click on the name and a dialog will pop up (as shown on the right). Just type in the name you prefer.

To finish off this model, add durations to the various other actions that may result from the individual failures. The collective times represent the impact of the failure on the driver’s time. Since you do not yet have data for how long each of these steps would take, just estimate them using triangular distributions of time (see sidebar below).

This shows an estimate from a minimum of ½ hour to a maximum of 1 hour, with the mean being ¾ hour. If you do this for the other actions, you can now execute the model to determine the impacts on time.
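For reference, sampling that estimate looks like this; Python’s standard library includes the triangular distribution:

```python
# Sampling the repair-time estimate described above: a triangular
# distribution from 0.5 h to 1.0 h with a mode of 0.75 h (which, being
# symmetric, is also the mean).
import random
import statistics

samples = [random.triangular(0.5, 1.0, 0.75) for _ in range(10_000)]
print(f"mean repair time: {statistics.mean(samples):.3f} h")   # ~0.750
```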

Note that you could also accumulate costs by adding a related Cost entity to each of the actions. Simply create an overall cost entity (e.g., “Failure Costs”) and then decompose it into the various costs of the repairs. Then assign the costs to the actions using a Hierarchical Comparison matrix. Select the parent process action (“F Vehicle Failure Process”) and use the Open menu to select the comparison matrix (at the bottom of the menu). A sidebar will then ask for the “Target Entity,” which is the “Failure Costs” entity you just created. Then select the “Target Relationship” (there is only one, “incurs,” between costs and actions) and push the blue “Generate” button to obtain the matrix. Select the intersections between the process steps and the costs. This creates the relationships between the actions and the costs. The result is shown below.

[Figure: Hierarchical Comparison matrix]

If you have not already added the values of the costs, you can do so from this matrix. Just select one of the cost entities and its attributes will show up in the sidebar (see below).

Note how you can add distributions here as well.

Finally, you want to see the results of the model. Execute the model using the discrete event and Monte Carlo simulators. To access these simulators, just select “Simulate” from the Action Diagram for the main process (“F Vehicle Failure Process”). You can see the results of a single discrete event simulation below. Note that the gray boxes mean those actions were never executed; they represent the rarer failure mode of an engine failure (assume that you change your oil regularly, or this would occur much more often).

To see the impact of many executions, use the Monte Carlo simulator. The results of this simulation for 1,000 runs are shown below.

As a result, you can see that for about a year in operation, the owner of this vehicle can expect to spend an average of over $1,560. However, you could spend over $3,750 in a bad year!
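To show the mechanics rather than reproduce Innoslate’s exact numbers, here is a hand-rolled version of this Monte Carlo; the failure modes, branch probabilities, and repair-cost distributions are all assumptions:

```python
# A hand-rolled year-of-driving Monte Carlo. The failure modes, branch
# probabilities, and repair-cost triangles are assumptions, so it will
# not reproduce the Innoslate numbers exactly; it shows the mechanics.
import random
import statistics

CYCLES_PER_YEAR = 10        # ten 100-hour cycles of normal operations
P_FAIL = 0.10               # failure chance per cycle
MODES = [("Flat tire",       0.60, (50, 150, 100)),
         ("Engine failure",  0.10, (1500, 4000, 2500)),
         ("Battery failure", 0.30, (100, 300, 200))]

def one_year():
    cost = 0.0
    for _ in range(CYCLES_PER_YEAR):
        if random.random() < P_FAIL:
            _, _, (lo, hi, mode) = random.choices(
                MODES, weights=[m[1] for m in MODES])[0]
            cost += random.triangular(lo, hi, mode)
    return cost

years = [one_year() for _ in range(1000)]
print(f"mean annual cost: ${statistics.mean(years):,.0f}")
print(f"worst year:       ${max(years):,.0f}")
```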

For more detailed analysis, you can use the “CSV Reports” to obtain the details of these runs.

[1] Source accessed 1/18/2017.

Developing Requirements from Models – Why and How

One of the benefits of having an integrated solution for requirements management and model-based systems engineering is that you can easily develop requirements from models. This is an increasingly common practice in the systems engineering community. Often, as requirements managers, we are given the task of updating a product or system, or developing an entirely new one. A great place to start in this situation is to create two models: a current model and a future (proposed) model. This way you can predict where the problems are in the current system and develop requirements from there. Innoslate has an easy way to automatically generate requirements documents from models. Below we’ll take a well-known example from the aerospace industry, the FireSAT model, to show you how to do this.

The diagram below shows the top level of the wildfire detection and alerting system. Fires are detected and then alerts are sent. Each of these steps is then decomposed in more detail. The decomposition can be continued until most aspects of the problem and mechanisms for detection and alerting have been identified. If timing and resources are added, this model can predict where the problems are in the current system. This model can show you that most fires are detected too late to be put out before consuming large areas of forest and surrounding populated areas.

One system proposed to solve this is a dedicated satellite constellation (FireSAT) that would detect wildfires early and alert crews to put them out. The same system could also aid in monitoring ongoing wildfires and in fire suppression and property damage analysis. Such a system could even provide this service worldwide. The proposed system for the design reference mission is shown below.

“Perform Normal Ops” is the only step decomposed, as that was the primary area of interest for this mission, which would be a short-term demonstration of the capability. Let’s decompose this step further.

Now we have a decomposition of the fire model, warning system, and response. The fire model and response were included to provide information about the effectiveness of such a capability. The other step provides the functionality required to perform the primary element of alerting. This element is essentially the communications subsystem of the satellite system (which includes requirements for ground systems as well as space systems).

Innoslate allows you to quickly obtain a requirements document for that subsystem. The document, in the Requirements View, looks like the picture below.

This model is just a quick example, but you can see that it contains several functional requirements. This document, once the model is complete, can then provide the basis for the Communications Subsystem Requirements.
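Conceptually, generating requirements from a functional model is a traversal of the action hierarchy that turns each action into a “shall” statement. This sketch uses invented action names; Innoslate does this automatically from the model database:

```python
# Invented action names; Innoslate generates the document automatically
# from the model database, but the underlying idea is a traversal like
# this one.
actions = {
    "1 Send Alerts": ["1.1 Format Alert Message",
                      "1.2 Transmit Alert to Ground Station"],
}

for parent, children in actions.items():
    for i, child in enumerate(children, start=1):
        name = child.split(" ", 1)[1]        # drop the number prefix
        print(f"REQ-{i}: The system shall {name.lower()}.")
# REQ-1: The system shall format alert message.
# REQ-2: The system shall transmit alert to ground station.
```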


If you’d like to see another example of how to do this, watch our Generating Requirements video for the autonomous vehicle example.

10 Most Important Requirements Capture and Management Rules

Requirements documentation plays an important role in systems engineering. Writing high-quality requirements can save not only millions of dollars but also lives. No matter how experienced you are, it’s important to remind yourself of requirement-writing rules and techniques.

  1.  Know Your Stakeholders

    The first and most important commandment of writing requirements is to know your stakeholders. Understand what common knowledge they have; make sure you are all on the same page. Understand each stakeholder group’s priorities and objectives. You do not want each group to develop its own priorities and objectives separately; separate priorities and objectives result in a time-consuming and expensive review process with lots of conflicts. Collaborative software that allows for continuous reviewing will help you keep up with all the stakeholders’ needs. You never want to give them a completely finished product and only then ask for review (although that is common practice).

  2. Remember the CONOPS

    Most of you will probably not forgo the Concept of Operations (CONOPS), since it is such a valuable artifact. The CONOPS will be something that all the stakeholders understand and collaborate on together. In this step you essentially create stories that consider different scenarios and needs. From there you will have a better understanding of where to start with your requirements. The CONOPS will help you write quality requirements by surfacing all the assumptions. It will help evaluate the ‘what if’ scenarios, make testing easier, and turn your needs into requirements.

  3.  Understand What is Really Needed

    First of all, there is a huge difference between want and need. Will the system work without a particular requirement? If you answered yes, then you can probably omit that requirement. A common mistake systems engineers make is listing possible solutions to needs rather than the actual needs. If your need is an efficient way to communicate, don’t specify cell phones, since there are many other forms of communication that may be more feasible, less expensive, or more effective. List what is actually needed; don’t list possible solutions to the needs.

  4. Be Specific (Give actual numbers. Don’t leave room for assumptions.)

    Leaving room for assumptions is leaving room for error. If you are not careful with the language you choose, you could end up causing costly assumptions. Words such as “minimize,” “maximize,” “etc.,” “and/or,” and “more efficient” force the stakeholders to assume. Don’t let the stakeholders assume how much you want to minimize. (A simple automated check for such words is sketched after this list.)

    • “etc.” can mean too many things
    • “and/or” makes the reader guess whether it’s ‘and’ or ‘or’
    • min./max.: don’t just say “minimize expansion,” say “minimize expansion to 300”
    • don’t just say “quick,” say how quick
    • give actual numbers
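A simple automated check along these lines is easy to build. This sketch flags vague words by naive substring matching; the word list is a starting point, not a standard:

```python
# Flags vague words by naive substring matching; a real check would
# tokenize, but this is enough to catch the usual offenders.
VAGUE = ["etc", "and/or", "minimize", "maximize", "quick", "efficient"]

def check(requirement: str) -> list[str]:
    lowered = requirement.lower()
    return [word for word in VAGUE if word in lowered]

req = "The enclosure shall minimize expansion and/or contraction."
print(check(req))   # ['and/or', 'minimize']
```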
  5. Do Not Be Too Specific

    The only mistake worse than not being specific enough is overspecifying. You want to be specific, but not too specific. Carefully review your requirements before baselining, and during this review delete any unnecessary specifics.

    Allow tolerance in your numbers. If expanding 300% +/- 10% is good enough, then give that option. Base any numbers on the results of analyses, not just someone’s “engineering judgment.”

  6. Give Requirements Not Instructions

    Understand what is needed and create requirements from those needs. This is why Commandment #1 is so important: if you understand your stakeholders’ needs, writing requirements rather than instructions becomes an easier task. It might be tempting to just write instructions, but that is not what requirements are for. Requirements should provide enough information to allow the builder to provide the most cost-effective solution to the problem.

  7.  Use the Words ‘Shall’, ‘Should’, and ‘Will’

    The industry’s standard word usage is “shall” for a requirement, “should” for a goal, and “will” for a statement. If you do not use these standard word choices, you will confuse other stakeholders.

  8. Include a Rationale

    A rationale justifies the inclusion of a specific requirement. Attach a rationale to each requirement by explaining the need for the requirement. The rationale provides reviewers and implementers with additional information on the intent of the requirements, thus avoiding confusion down the line.

  9. Use Proper Grammar

    You will prevent a lot of costly mistakes due to confusion if you use proper grammar. For example, run-on sentences can make two requirements appear to be one. One technique to improve grammar is to jot down bullet points first and then construct sentences from them.

  10. Use a Standard

    Use a standard to ensure consistency. Three common standards are MIL-STD-490, IEEE, and ISO. You should choose one that is right for your industry.

    MIL-STD-490: This United States military standard establishes the format and content of specifications for the United States Department of Defense. It can be useful in other areas as well.

    IEEE: The Institute of Electrical and Electronics Engineers Standards Association develops the IEEE standards. Unlike the MIL-STDs, the IEEE reaches a broad range of industries, including transportation, healthcare, information technology, power, energy, and much more.

    ISO: The International Organization for Standardization develops standards for business to optimize productivity and minimize costs.

PLM Moving to the Cloud

Why Product Lifecycle Management Is Moving to the Cloud

“The cloud” means many things to many people. It’s a common misconception that the cloud is the Internet itself. People think that all the information they put on the cloud can be easily “hacked,” so they see it as a very public thing. But those who work in cloud computing see it as a means to deliver safe, secure services to more people at a lower cost. You can share computer resources, including CPU power, memory, and storage. This sharing, or “on-demand” use, of computer resources means that you can pay less for those resources than if you had provisioned them on your own.

To take advantage of this resource sharing, you must use applications that take this new environment into account. Just putting a “web front end” on a client-server or desktop tool does not work well. Application programmers must re-architect their code to take advantage of this new capability and at the same time deal with the problems, such as latency, since now data must pass between the servers where it is stored and the web browser on the client machine you are using. Those servers may be down the hall or a few miles away, so there can be substantial delays in data transmission.


Most desktop/client-server tools assume very little latency, so they grab a lot of information at a time and put it into local memory. That’s fine when you are close to the data, but in cloud computing the servers could be anywhere in the world, or at least across the continent. So, when people try to use a desktop tool in this new environment, it begins to break down quickly in terms of response time. Another way to say this is that these tools do not scale to meet growing needs, yet the whole idea of cloud computing is to allow the application to scale to meet the needs.


Cloud computing also enables worldwide collaboration. Now the need to scale becomes critical, as more and more people work together and capture and generate more and more information. A “web-based” tool must be designed to process more information locally, including visualization of the data. Otherwise, we are back to central computing, where you had a dumb terminal connected to a computer often far away. I can still remember how slow the response was when that occurred. Even though bandwidth has grown to gigabits per second, we are now trying to move terabytes of information.
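The arithmetic behind that last point is sobering:

```python
# One terabyte over a one-gigabit-per-second link:
bits_to_move = 1e12 * 8      # 1 TB expressed in bits
bandwidth    = 1e9           # 1 Gbps in bits per second
print(bits_to_move / bandwidth / 3600, "hours")   # ~2.2 hours
```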

PLM on the Cloud

So, what does all this have to do with Product Lifecycle Management (PLM)? PLM today requires a large amount of data, analytical tools to transform that data into information, and personnel who collaborate to create the products. Clearly, PLM would benefit greatly from this new cloud computing environment. But where are the cloud computing products for this market? Legacy tool makers are reluctant to re-architect hundreds of thousands of lines of code. Such an effort would take years and be very expensive, only to compete with themselves during the transition. So, most have created some “web front end” to provide limited access to the information that exists in the client-server or (worst case) desktop product.

Innoslate® is the rare exception in the PLM marketplace. Innoslate was designed from scratch as a cloud computing tool. The database backend persists the data, while the web front end visualizes it and performs the necessary analyses, including complex discrete event and Monte Carlo simulations. Innoslate supports all areas of PLM, from systems engineering, to program management, to product design, to process management, to data management, and more. All this in one simple, collaborative, scalable, and easy-to-use tool. Check out Innoslate for details.

What is Model-Based Systems Engineering?

Systems engineering is the discipline of engineering that endeavors to perfect systems. As such, systems engineering is a kind of meta-engineering that can be applied across all complex team-based disciplines. The idea of systems engineering is to enhance the performance of human systems. It has as much to do with the engineering of team performance, meetings, and political agreements as with hardware. It is not quite a social science, but it is the science of perfecting human outcomes.

The International Council on Systems Engineering (INCOSE) has been the organizing body for systems engineering programs. The field of systems engineering has been emerging as a discipline with its own unique training and advanced degree programs. As of 2009, some 80 United States institutions offered undergraduate and graduate programs in systems engineering.

Systems-centric programs treat systems engineering as a separate discipline, with a specific focus on a separately developed body of theory and practice. Domain-centric programs are embedded within conventional engineering fields. All programs are designed to develop the capability of managing and overseeing large-scale engineering projects.

The field of systems engineering has developed its own unique set of tools and methodologies that have less to do with the physics and mathematics of the project’s hardware and more to do with the process of bringing the elements of the project together. Modeling software here refers to modeling the process of creation, not so much what is being created. These tools enable members of a project team to better collaborate and plan the process of creating a complex finished product, such as a space station or a skyscraper.

Innoslate has developed as an all-inclusive baseline tool for modeling and managing systems engineering projects. It includes full collaboration features that allow all team members to work from the same information base, to contribute new information, and to have access to what has been accumulated. It has a rich vocabulary of diagramming and flow-charting media to illustrate, change, and embellish the engineering process over time. It has modalities to provide clear feedback about the growth of the project, where it has achieved goals and where it lags behind.

Reposted from SPEC Innovation with permission.


How to Keep MBSE from Becoming Just a Buzzword (or Is It Too Late?)

The term “Model-Based Systems Engineering” or “MBSE” has been around for nearly a decade. We see the term in requests for proposals, marketing materials, social media, conferences and many other places in the systems engineering community and even in the general public. Clearly, MBSE has become an important part of systems engineering, but has it also become the definition of a buzzword? First, take a look at the definition of a buzzword.




  1. a word or phrase, often sounding authoritative or technical, that is a vogue term in a particular profession, field of study, popular culture, etc.


So, it definitely sounds authoritative, as it comes from the “International Council on Systems Engineering” (INCOSE). It sounds technical, using “Model-Based” and “Systems Engineering.” And clearly, it’s “in vogue,” from its appearance everywhere.


What the definition of a buzzword doesn’t seem to provide is the negative context the term often carries, something Dilbert has famously skewered.



In practice, a buzzword is a term used by people who don’t really know what it means, and we have all heard MBSE used that way. So, what does MBSE really mean?


Well, to understand its real meaning, we need to review the definition of MBSE from INCOSE:

 “Model-based systems engineering (MBSE) is the formalized application of modeling to support system requirements, design, analysis, verification and validation, beginning in the conceptual design phase and continuing throughout development and later life cycle phases.” – INCOSE


As systems engineers, the first thing we want to do is decompose this rather long sentence. It can be broken down into two parts:

  • Modeling (formalized application); and
  • Lifecycle (system requirements, design, analysis, verification and validation).


The formalized application of modeling means that we create models of the system using a “standard.” We know there are a number of formal and informal standards, which are applied in many different ways. The standard most are familiar with is SysML, since it is a profile of UML; SysML focuses primarily on communicating with the software community. The Lifecycle Modeling Language (LML) open standard covers the second part of the definition better, as its name implies. It also addresses the program management aspects of systems engineering (risk, cost, schedule, etc.), none of which is really addressed by SysML.


But we have been creating drawings, which are a type of model, since well before anyone called the discipline systems engineering. So, what makes this term different from classic systems engineering?


The key difference is the type of modeling we use when we talk about MBSE. We mean the development of “computable models.” Computable models are models based on data (usually in a standard ontology, like the one LML provides) that can be visualized in standard ways (again using any drawing standard, which both SysML and LML provide). These models can also be tested to determine their validity and to make sure we don’t introduce errors in logic or problems related to dynamic constraints (i.e., lack of resources, bandwidth, latencies, etc.). This testing also includes checking the models against general rules of quality, such as “all function names should start with a verb.” The tools for this kind of testing today include simulation (e.g., discrete event, Monte Carlo) and natural language processing (NLP).
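As a toy example of such a quality heuristic, here is a naive check that action names start with a verb. Real tools use natural language processing for part-of-speech tagging; this sketch just assumes a small hand-made verb list:

```python
# A naive stand-in for the "function names should start with a verb"
# heuristic. Real tools use NLP part-of-speech tagging; this sketch
# assumes a small hand-made verb list.
VERBS = {"perform", "detect", "send", "transmit", "monitor", "alert"}

def starts_with_verb(action_name: str) -> bool:
    return action_name.split()[0].lower() in VERBS

for name in ["Detect Fire", "Fire Detection", "Send Alerts"]:
    verdict = "OK" if starts_with_verb(name) else "rename me"
    print(f"{name}: {verdict}")
```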


Having models that can be tested and testing them is a clear way to make MBSE real and not a buzzword. Therefore, to implement MBSE you need a tool or set of tools to conduct this testing.


When considering a “MBSE” tool, you will hear claims from almost all of the tool vendors that they are one. To distinguish between those who deliver on the promise of MBSE and those who are treating it as a buzzword, just ask the following questions:


  1. Are your diagrams essentially drawings or are they automatically generated from the data?
  2. If I make a change to one piece of data in the database is that automatically updated in all the other visualizations of that piece of data, including the diagrams?
  3. Can I execute the models using strong simulation techniques?
  4. Do those simulation techniques include discrete event and Monte Carlo?
  5. Do the simulations take into account resource, latency, and bandwidth constraints?
  6. Does your tool test the entire model against common standards of good practice (heuristics)?
  7. Does your tool support the entire lifecycle (system requirements, design, analysis, verification and validation) in a seamless, integrated fashion?


If you ask all these questions, you will find a limited set of tools that even come close to keeping MBSE from being just a buzzword. So, it’s essential that you carefully evaluate these tools to make sure they provide the support you need to become more productive and produce higher-quality products. To see a tool that meets all these needs, check out Innoslate.

Quick Guide to Innoslate’s Ontology

Innoslate uses the Lifecycle Modeling Language (LML) ontology as the basis for the tool’s database schema. For those new to the word “ontology,” it’s simply the set of classes and relationships between them that form the basis for capturing the information needed. We look at this in a simple Entity-Relationship-Attribute (ERA) form. This formulation has a simple parallel to the way we look at most languages: entities represent nouns; relationships represent verbs; attributes on the entity represent adjectives; and attributes on relationships represent adverbs.
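In miniature (a schematic, not Innoslate’s actual schema), the ERA form looks like this:

```python
# Entities are nouns, relationships are verbs, attributes on entities
# are adjectives, attributes on relationships are adverbs.
entity = {"class": "Action", "name": "Detect Fire",        # noun
          "attributes": {"duration": "2 hours"}}           # adjective
relationship = {"verb": "decomposed by",                   # verb
                "source": "Perform Normal Ops",
                "target": "Detect Fire",
                "attributes": {"order": 1}}                # adverb
print(relationship["source"], relationship["verb"], relationship["target"])
```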

LML contains twelve (12) entity classes and eight (8) subclasses. They represent the basic elements of information needed to describe almost any system. The figure below shows how they can be grouped to create the models needed for this description.

Most of these entity classes have various ways to visualize the information, which are commonly called models or diagrams. The benefit of producing the visualizations using this ontology means that when you create one model, other models that use the same information will automatically have that information available.

All these entities are linked to one another through the relationships. The primary relationships are shown below.


This language takes a little getting used to, like any other language. For example, you might be used to referring to something functional as a Function or an Activity; these are both “types” of Actions in LML, implemented as labels in Innoslate. Similarly, you may be used to using different relationship names for parents and children in different entity classes. By using the same verbs for all parent-child relationships, however, you can avoid the confusion of having to remember all the different verbs.

You still might need other ontological additions; LML was meant to be the “80% solution.” Look very closely at the ontology first, as often you only need to add types (labels) or an attribute here and there. Hopefully, you will rarely need to add new classes and relationships. If you do add new classes, try to do so as subclasses of existing ones, so that you inherit their diagrams as well. For example, when the Innoslate development team added the new Test Center, they decided to extend the Action class. This enables the TestCase class to inherit the Action class’s functional diagrams, as well as the status, duration, and other attributes that were important.
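The subclass advice mirrors ordinary object-oriented inheritance. A sketch with simplified, assumed classes:

```python
# Simplified, assumed classes: extending Action lets TestCase inherit
# the functional attributes (status, duration) instead of redefining them.
class Action:
    def __init__(self, name, duration=None, status="Not Started"):
        self.name = name
        self.duration = duration
        self.status = status

class TestCase(Action):                  # inherits duration, status, etc.
    def __init__(self, name, expected_result, **kwargs):
        super().__init__(name, **kwargs)
        self.expected_result = expected_result

tc = TestCase("Verify alert latency", expected_result="< 5 s",
              duration="30 minutes")
print(tc.name, tc.status, tc.expected_result)
```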

Hopefully, you can see the benefits of using LML as the basis for Innoslate’s schema. It was designed to be:

  • Broad (covers the entire lifecycle – technical and programmatic)
  • Ontology-based (enables translation from LML to other languages and back)
  • Complete (provides all the capabilities of SysML, with LML v1.1 extensions, and DoDAF)
  • Simple in structure
  • Useful for stakeholders across the entire lifecycle

For more information, see the LML specification and visit the Innoslate Help Center.