Is There a Return on Investment from Model-Based Systems Engineering (MBSE)? Part 1

Join us for a live webinar on this topic, “Is There a ROI From MBSE” on Thursday, October 17th at 11:00 am ET. Register Here.

This is a question many people ask. In fact, the International Council on Systems Engineering (INCOSE) has made it one of the tasks of its Value Proposition Initiative: a group of systems engineers is trying to find evidence that MBSE has value. That is difficult, however, for a concept that has only been around for about a dozen years, when the lifecycles of many of the systems of interest are measured in decades.

We can approach this question by inference: if there is a significant return on investment in systems engineering, then we can infer that there might be one for MBSE. Fortunately, we have many decades of experience applying systems engineering to projects, going back to at least the 1950s (depending on how you define systems engineering). One of the best analyses I have come across over the years was a very interesting piece of work by Werner M. Gruhl, who was at the time the Chief of the Cost & Economics Analysis Branch at NASA. His work was published in a 1994 NASA technical paper entitled “Issues in NASA Program and Project Management” (NASA SP-6101 (08)). In a paper in this publication, Ivy Hooks states that “if the program requirements are not well understood, there is not much hope for estimating the cost of a program.” She continues: “Werner Gruhl developed a history of NASA programs versus cost overruns” and cited the diagram below (redrawn due to the poor quality of the source document, an old scanned PDF). She interpreted this chart as “if you have not done a good job in Phase A and B in defining and confining your program, you are going to encounter large numbers of changing requirements and the cost will go up accordingly.” Note that the figure below indicates “% Spent on Systems Engineering,” which is really Phases A and B in NASA terminology.

Thus, it’s clear that at least the combination of program management and systems engineering, which is what allows you to properly develop the set of requirements for the program, is required to keep the cost of the project from skyrocketing. Note that program management and systems engineering are flip sides of the same coin: the program manager optimizes cost, schedule, and performance while mitigating risk in each of these areas, and the systems engineer is tasked to do the same for the system. That’s why these two disciplines have long been seen to overlap, as recognized in a recent book by INCOSE and the Project Management Institute (PMI).

Another, more recent attempt at quantifying the value of systems engineering came from the Software Engineering Institute. They conducted several studies to determine the value of systems engineering, including one documented in their November 2012 paper entitled “The Business Case for Systems Engineering Study: Results of the Systems Engineering Effectiveness Survey.” The authors “found clear and significant relationships between the application of SE best practices to projects and the performance of those projects,” as seen in the figure below.

Project Performance vs. Total SE Capability

In a later presentation, “Quantifying the Effectiveness of Systems Engineering,” Mr. Joseph P. Elm, one of the authors of the 2012 paper, cites a finding from a Government Accountability Office (GAO) report (GAO-09-362T) that states:

“… managers rely heavily on assumptions about system requirements, technology, and design maturity, which are consistently too optimistic. These gaps are largely the result of a lack of a disciplined systems engineering analysis prior to beginning system development …”

So, it is recognized that there is great value in performing at least the “right amount” of systems engineering. If we use the Gruhl graph as a basis, we need to spend around 7-12% of the program’s budget on the combination of program management and systems engineering. Since, according to the chart, the cost of the program could otherwise as much as double on average, the return would be about 10 times the investment. For example, if we spent $100,000 on systems engineering and program management and the overall cost of the program was $1,000,000, then, since the cost could have doubled to $2,000,000, we saved $1,000,000.
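The arithmetic above can be laid out as a quick back-of-the-envelope calculation (the percentages and overrun factor are illustrative figures read from the Gruhl chart, not exact values):

```python
# Back-of-the-envelope ROI estimate based on the Gruhl chart.
# Assumption (illustrative): skimping on SE/PM roughly doubles program cost.

program_cost = 1_000_000      # planned program cost ($)
se_pm_fraction = 0.10         # ~7-12% spent on SE + program management
overrun_factor = 2.0          # cost could double without adequate SE/PM

investment = program_cost * se_pm_fraction             # ~$100,000
avoided_overrun = program_cost * (overrun_factor - 1)  # ~$1,000,000 avoided

roi = avoided_overrun / investment
print(f"Investment:      ${investment:,.0f}")
print(f"Avoided overrun: ${avoided_overrun:,.0f}")
print(f"Return on investment: about {roi:.0f}x")
```

Even if the true overrun factor is lower than 2.0, the return remains large relative to the modest up-front spend.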

So now that we agree there is a substantial return on investment in systems engineering, let’s get back to the question of ROI on MBSE. The question then becomes, “Does modeling help systems engineering?” Since we have always done modeling in systems engineering, I think it is clearly a part of good systems engineering. But the flavor of MBSE being pushed by many in the community has equated MBSE with SysML, and many have also equated SysML with its implementation in MagicDraw®. But do SysML and MagicDraw® do all the things we need to do in systems engineering? In particular, do we obtain a good set of system requirements in a form easily used by all the stakeholders?

To begin to answer these questions, let’s go back to Mr. Elm’s paper. He states that the systems engineer must perform the following tasks:

  • Requirements Development
  • Requirements Management
  • Trade Studies
  • System Architecture Development
  • Interface Management
  • Configuration Management
  • Program Planning
  • Program Monitoring and Control
  • Risk Management
  • Product Integration Planning and Oversight
  • Verification Planning and Oversight
  • Validation Planning and Oversight


SysML consists of nine diagram types, most of which were derived from software engineering practices, not systems engineering. Yes, there is overlap between the two, but not as much as the overlap between systems engineering and program management. That becomes obvious from the task list above, several items of which explicitly include the word “management.”

SysML has also proven very difficult for most other disciplines to understand, since they speak other languages. It takes at least two large books, “A Practical Guide to SysML” and “SysML Distilled,” to explain it. In comparison, the Lifecycle Modeling Language (LML) provides a complete systems engineering ontology and a limited set of diagrams that together subsume SysML. Yet LML can still be explained in a very thin book, “Essential LML.” You can see the comparison in the picture below.

You are probably asking, “How is it possible that LML subsumes SysML?” By using an ontological approach that defines a set of entity classes and their relationships, along with attributes on both, LML provides all the elements of a real language (nouns, verbs, adjectives, and adverbs). This ontology can be used to capture information easily and efficiently. That information can then be displayed in many ways, including all nine SysML diagram types.
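To make the ontological approach concrete, here is a minimal sketch (not the actual LML schema; the class and relationship names are illustrative, loosely patterned on LML): entities carry attributes, typed relationships connect them, and any diagram is then just a view generated over that single store of data.

```python
# Sketch of an ontology: entity classes with attributes ("adjectives/adverbs")
# and named relationships ("verbs") connecting entities ("nouns").
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    entity_class: str                   # e.g. "Action", "Asset", "Requirement"
    attributes: dict = field(default_factory=dict)

@dataclass
class Relationship:
    source: Entity
    target: Entity
    kind: str                           # e.g. "performed by", "decomposed by"

# Capture the information once...
radar = Entity("Radar", "Asset")
detect = Entity("Detect Target", "Action", {"duration_s": 0.5})
model = [Relationship(detect, radar, "performed by")]

# ...then render it in any number of views; here, a crude textual "diagram".
for rel in model:
    print(f"{rel.source.name} --[{rel.kind}]--> {rel.target.name}")
```

Because the data, not the drawing, is primary, the same three objects could equally be rendered as an activity-style diagram, a physical block diagram, or a traceability table.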

The Innoslate® tool proves this assertion, as it produces all nine SysML diagrams (and many more) from this ontology, as extended in Version 1.1 of the LML specification. In addition to the SysML diagrams, Innoslate produces the LML Action Diagram, which represents the same information as the SysML Activity Diagram, but in a significantly more understandable form. We can see this when we compare the two types of diagrams side by side, as shown below.

LML Action Diagram Example

SysML Activity Diagram

In the SysML diagram, I need to know what the diamond and fork symbols mean. In the Action Diagram, I know exactly what they mean, because the words OR, LOOP, and Decomposed make their intent clear. In addition, in SysML I cannot allocate the decision points to whoever or whatever performs them; in LML I can. Of course, if there were only two symbols to decipher, I would not care as much, but SysML has over 30 such symbols. You need a “3-D decoder ring” to fully understand how to use them all, hence the very large books and long training classes needed to learn SysML. This learning curve translates into a significant investment in the workforce to get them up to speed on this complex language. And of course, the electrical engineers, mechanical engineers, logistics experts, and all the other disciplines have their own languages and have no interest in learning something this complex.

You might say, “But surely MagicDraw overcomes these limitations in SysML?” The answer is that it does not. In particular, if we go back to the list of systems engineering tasks, MagicDraw does only one well: System Architecture Development. Although MagicDraw has some limited requirements capability, almost everyone uses a separate requirements tool in conjunction with it. Innoslate, by comparison, has a robust Requirements View that includes automated requirements quality checking using Natural Language Processing (NLP), a branch of artificial intelligence. Requirements can then be directly traced to the diagram entities within the same tool, resulting in one database and none of the configuration management problems you encounter with two. Innoslate also has a built-in Test Center for the V&V activities. In addition, Innoslate provides Discrete Event and Monte Carlo simulators to verify that the Action (or Activity) Diagrams have been constructed correctly. We use that same approach to support Program Planning, Monitoring and Control.
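Innoslate’s actual checker is proprietary, but the general idea behind automated requirements quality checking can be illustrated with a simple heuristic pass that flags ambiguous or unverifiable wording. The word list and rules below are illustrative only; a real checker uses far larger lexicons and genuine language parsing.

```python
import re

# Illustrative list of words that make a requirement ambiguous or untestable.
AMBIGUOUS = ["appropriate", "adequate", "user-friendly", "fast",
             "as required", "and/or", "tbd"]

def check_requirement(text: str) -> list[str]:
    """Return a list of quality issues found in a requirement statement."""
    issues = []
    lowered = text.lower()
    for word in AMBIGUOUS:
        if re.search(rf"\b{re.escape(word)}\b", lowered):
            issues.append(f"ambiguous term: '{word}'")
    if not re.search(r"\bshall\b", lowered):
        issues.append("missing imperative 'shall'")
    elif len(re.findall(r"\bshall\b", lowered)) > 1:
        issues.append("compound requirement (multiple 'shall' clauses)")
    return issues

print(check_requirement("The system should respond fast."))
print(check_requirement("The system shall respond within 2 seconds."))
```

The second requirement passes cleanly because it uses the imperative “shall” and a measurable bound instead of a vague adjective.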

So back to the ROI discussion. Can MBSE provide a healthy ROI? Only if we do all the things we need to do in systems engineering. And if we can use modern technology to help automate these difficult tasks, we can achieve an even higher ROI than systems engineering provides by itself. Innoslate and LML provide a means to that higher ROI, while MagicDraw and SysML cost much, much more to implement and yield a poorer result. So, if you want ROI from your MBSE investment, use Innoslate and LML.


Webinar: What’s New in Innoslate 4.2?

Join us Wednesday, July 31st at 11:00 am for “What’s New in Innoslate 4.2?” Innoslate 4.2 brings a lot of new features and updates to improve the capabilities of systems engineering and requirements management.

Register here

Your host, Dr. Steven Dam, will walk you through all the changes in Innoslate 4.2. He’ll show you the new Charts View and how to create and edit XY plots. You’ll also get to see how to generate Systems Requirements Documents (SRDs) right from an asset diagram. Other new features that will be shown:

  • Support Dashboard
  • Roll Up Models
  • Entity Definition Report
  • Document Template Generation


Can’t make the webinar? Come back here for a recording.


Innoslate 101: A Webinar for New Users

Come join us June 6th at 11:00 am EDT for “Innoslate 101: A Webinar for New Users.” Newbie Innoslate user, Joannah Moore, is going to show you just how easy it is to learn Innoslate. She will walk you through the ins and outs of the tool and show you how you can become an expert Innoslate user in no time.

Stay after the live demonstration for a question and answer session with systems engineering expert, Dr. Steven Dam.

Register Today

About Your Host

Joannah Moore works in both Sales and Support at SPEC Innovations. Before SPEC, Joannah’s career followed a strict business path, including commercial insurance and property management. Years into property management, however, she was hungry for more. That brings us to 2018, when she joined SPEC Innovations as a recent college graduate with a B.S. in Business-IT Management. She is a certified IT professional with certifications in various fields of IT, including project management.

About Innoslate

Innoslate is the model-based systems engineering solution of the future. An all-in-one software package made for systems engineers and program managers, it lets you keep your requirements management, modeling and simulation, test management, and more all in one place. Smarter, more successful systems start here. Create a trial account to get started.

Why Do We Model?

We often say that the job of a systems engineer is to “optimize the system’s cost, schedule, and performance, while mitigating risks in each of these areas.” Note that this is essentially the same thing that the program manager does for the program, hence the close relationship between the two disciplines.

Everyone is talking about “Model-Based Systems Engineering” or MBSE, but why are we modeling? What are we supposed to be getting out of these models? To answer these questions, we have to go back to basics and talk about what we are doing as systems engineers.

Another aspect of systems engineering is that we need to be the honest broker by optimizing the design between all the different design disciplines. The picture below shows what would happen if we let any one particular discipline dominate the design.

Our modeling must support both these optimization goals: 1) cost, schedule, performance, and risk; 2) design disciplines. So how does modeling support that?

Using the Lifecycle Modeling Language and its implementation in the Innoslate® tool, we can easily accomplish both tasks. For cost, schedule, and performance optimization, we use only two diagrams, Action and Asset, along with the ontology entity classes of Actions, Assets, Input/Outputs, and Conduits as the primary entities in these diagrams. Innoslate® also includes Resources, as well as allocation of Actions to Assets (the performed by/performs relationship) and of Input/Outputs to Conduits (transferred by/transfers). This capability to allocate entities to each other allows the functional model to be constrained by the physical model. The constraint arises because Input/Outputs have a size, while Conduits have latency and capacity. Thus, we can calculate the appropriate delays for the transmission of data, power, or any other physical flow.
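The delay calculation described above reduces to a simple formula (a simplified sketch; Innoslate’s internal computation may differ): the time to push the Input/Output’s size through the Conduit’s capacity, plus the Conduit’s fixed latency.

```python
def transmission_delay(io_size_bits: float,
                       conduit_capacity_bps: float,
                       conduit_latency_s: float) -> float:
    """Delay for an Input/Output flowing over the Conduit it is allocated to:
    serialization time (size / capacity) plus the conduit's fixed latency."""
    return io_size_bits / conduit_capacity_bps + conduit_latency_s

# Example: an 8 Mb sensor frame over a 100 Mb/s link with 20 ms latency.
delay = transmission_delay(8e6, 100e6, 0.020)
print(f"{delay * 1000:.0f} ms")   # 80 ms serialization + 20 ms latency
```

This is exactly how the physical model (Conduit attributes) constrains the functional model (Action timing): the functional flow cannot complete faster than the allocated physical path allows.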

The Resources can be used to represent key performance parameters like weight (mass) and power. Actions can produce, seize, or consume Resources. Another key performance parameter is timing. Time is included in the Action as the duration of each step; each Action can of course be decomposed, and the timings of these subordinate steps accumulate into the overall system timings of interest. We can see how this approach gives us the information needed to predict the performance of the system.

Note that we can model the business, operations, or development processes this same way and thus use this modeling to derive the overall schedule for the program. So, we get to the Schedule part of optimization as well using the same approach. Talk about reducing confusion between the systems engineering and program management disciplines.

But let’s not forget Cost. Since LML defines an independent Cost class, we can use that to identify the costs incurred by personnel in each step of the process and consumption of resources.

So now, if we can dynamically accumulate these performance parameters, schedule, and cost elements through process execution, we have the first part of our first optimization goal. We can easily execute this model using the discrete event simulator built into Innoslate®. Each execution of the model traverses at least one path through it. The tool accumulates the values for cost, produces a Gantt chart schedule, and tracks Resource usage over time, which leads us to performance.

But how do we get to risk? That’s where we find that the values we use for size, latency, capacity, duration, and the other numerical attributes of these entities can be represented by distributions. With these distributions, we can execute the built-in Monte Carlo simulator, running the model as many times as needed to create the distributions for cost, schedule, and performance (Resources). These distributions represent the uncertainties of achieving each item, and those uncertainties are directly related to the probabilities of occurrence for the risk in each area. If we add consequence to this probability, we have the estimated value for Risk. Of course, LML gives us a Risk class, which has been fully implemented in Innoslate® and is visualized using the Risk Matrix diagram.
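A minimal sketch of the Monte Carlo idea follows (the two-step process, the distributions, and the consequence figure are all illustrative assumptions, not Innoslate’s simulator): replace fixed durations with distributions, run the model many times, and read the overrun probability off the resulting distribution.

```python
import random

def run_once() -> float:
    """One execution of a toy two-step process; each step's duration is drawn
    from a distribution rather than taken as a fixed value."""
    step1 = random.triangular(low=2.0, high=6.0, mode=3.0)  # days
    step2 = random.gauss(4.0, 0.5)                          # days
    return step1 + step2

random.seed(42)                      # fixed seed for repeatability
durations = [run_once() for _ in range(10_000)]

threshold = 9.0                      # schedule commitment, in days
p_overrun = sum(d > threshold for d in durations) / len(durations)
print(f"Probability of exceeding {threshold} days: {p_overrun:.1%}")

# Risk = probability x consequence (consequence assumed: a $50k late penalty)
expected_risk = p_overrun * 50_000
print(f"Expected schedule risk: ${expected_risk:,.0f}")
```

The same loop, applied to cost and Resource attributes instead of durations, yields the cost and performance uncertainty distributions described above.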

Now that we have the first optimization covered, how do we get to the second one: optimization across the design disciplines? LML comes into play there as well. LML is an easy-to-understand language designed for all the stakeholders, including the design engineers, management, users, operators, cost analysts, and others. They can all play their role in the system development, many using their own tools. LML provides the common language that anyone can use, and we can easily translate what the electrical or mechanical (or any other) engineer does into this language. Innoslate® also provides the capability to store and view CAD files. Results from Computational Fluid Dynamics (CFD) codes or other physics-based models can also be captured as Artifacts. We can take the summary results and translate them into the performance distributions used in the Monte Carlo calculations. For example, if we use Riverbed to characterize the capacity (bandwidth) and latency of a network, we can take those resulting distributions and use them to refine our model, then rerun the Monte Carlo calculation and see the impact.

LML and Innoslate® give us the capability to meet the optimization goals of both systems engineering and program management in a simple way that is easy to explain to decision makers. Think of LML and Innoslate® as modeling made useful.

Implement a Strategy for Transforming from Office Products to Model-Based Systems Engineering (MBSE)

Often people ask me, “Who is your major competitor?” My usual response is “Microsoft Office.” I don’t say this because MS Office is a bad tool; it is not. It is a very good tool for publishing information. I use it often and have become quite the expert at using it for books, papers, accounting, and presentations. But for systems engineering, it’s not the right toolset. Unfortunately, most people trying to perform SE tasks use MS Office because it’s the only tool they have. It’s cheap and already approved for use by management, but it does not provide all the capabilities SEs need. You can’t easily perform the kinds of analyses we need, such as functional analysis, simulation, requirements analysis, and risk analysis. That’s not to say you can’t perform these analyses the way your parents (or grandparents) did. They did it, but usually with armies of people, with relatively simpler systems, and with libraries staffed by librarians who helped them find what they needed. If you think I am exaggerating, I actually saw this in effect as late as 1986, before personal computers were extensively deployed.

But with the widespread availability of networked, high performance computer systems providing ready access to amazing processing capabilities, and the breadth of the worldwide web, we also have new tools that can do much more. And it’s a good thing, because at the same time this technology has caused system complexity to grow exponentially. We no longer have the “armies of people” and “librarians” available to help us do the work. So we have to do more with less.

So, let’s say your management has finally realized that you need better tools, or they just want to be “fully buzzword compliant” by jumping on the MBSE train. Now how are you going to come up to speed on a new toolset, while still continuing to meet cost and schedule?

The purpose of this paper is to help you make the shift from products like MS Office to a true MBSE tool: Innoslate®. Some of these strategies may be useful in migrating from Office to other tools, but those tools really don’t have all the features you need, and once you assemble the full set you will have to spend a lot more on them. The money spent isn’t just the cost of the tools, but the people cost of operating them. Much like Office, the toolsets being offered by others are really just sets of individual tools that were loosely “integrated” into a package. That’s why they have so many “plug-ins.”

So on with the strategies.

Strategy 1: Start Slow

Just as you can’t eat an elephant all at once (and I don’t recommend eating elephants at all!), you should migrate your information a piece at a time. For example, perhaps you have a big library of Visio diagrams that you want to reuse in Innoslate®. You might ask, “Does Innoslate® have a way to import diagrams from Visio?” The answer is no. The reason is that tools like Visio are just drawing tools that don’t provide “semantic” meaning. One definition of that term is “of, relating to, or arising from the different meanings of words or other symbols.” In other words, drawings require a significant set of rules to represent information. An example from flowcharting is the use of a diamond to represent a decision or a rectangle to represent a process. Both the writer and the reader of these diagrams must fully understand those representations for the chart to have any meaning. Unfortunately, with a pure drawing tool like Visio, people use these symbols incorrectly or in different ways, which makes it difficult for the reader to know what the writer really meant. We have the same problem with unstructured words: writers use obscure words, or use words incorrectly, which interferes with communication.

This problem is why MBSE has taken off as a concept to enhance the communications. The tools can help enforce the rules for diagrams. The diagrams can even be analyzed automatically by the computer algorithms to suggest improvements to the diagrams to make them more compliant to the rules.

So, what does this mean? It means that the diagrams as drawn in Visio are likely in error. Just moving the boxes and lines over to another tool will also bring all the errors with them. What should you do?

We recommend taking a few of the diagrams you are using right now, putting them on one side of your desk (or on your second monitor, if you have one), and starting a new diagram in Innoslate®. We recommend starting with the Action Diagram for a flow or process chart. You will need to interpret the information in the current diagram to create the new one. You will want to take advantage of Innoslate®’s capability to decompose Actions, thus simplifying large diagrams. That allows you to identify subprocesses, which may be repeated in various parts of the diagram. In this way, you will gain a better understanding of the tool and the limitations (rules) that govern the diagrams.


Strategy 2: Only Start on a New Task

In this approach you keep legacy information separate, or use Innoslate® Artifact class entities to store the files from previous work so you can find them if you need them. If you don’t have a strong requirements document from your customer (and you usually don’t), we again recommend starting with the Action Diagram and capturing the current operational or business processes. The purpose of these models is to identify where the problem areas exist, so you can then postulate solutions to those problems.


Strategy 3: Start with an Innoslate® Workshop

An Innoslate® Workshop makes learning Innoslate® easier by having our trainers work directly with your problem as the basis for the training. The training is tailored to your processes, situation (such as the phase of development), and problem, which gives it greater relevance to your people. It has the added benefit of helping you get started on solving your particular problem. Don’t worry about us knowing too much about your business: we are happy to sign any appropriate non-disclosure agreements (NDAs), and our personnel have the necessary clearances to help you at any level of classification. SPEC Innovations is a woman-owned small business, so the chances of a conflict of interest are negligible.


Start Any Time

This, in a sense, is not a strategy, because it applies regardless of the situation. The timing is never perfect for moving from one way of doing things to another; the sooner you get started, the sooner you can reap the benefits of MBSE. Just get in there and apply any or all of the strategies above. Let us help you make it as painless as possible. Ben Franklin said, “There are no gains without pains,” and it applies here as well. Just like starting an exercise program in the new year: the sooner you start, the sooner you feel better.

A Synopsis of CSER 2019 (Conference on Systems Engineering Research)

Thoughts from the Conference on Systems Engineering Research (CSER) held last week at the National Press Club in Washington DC.  

By Christian Manilli


As many of you may know, CSER was organized by the Systems Engineering Research Center (SERC), a University-Affiliated Research Center of the US Department of Defense. The SERC leverages the research and expertise of senior researchers from 22 universities throughout the United States. It is unique in the depth and breadth of its reach, and it advocates for the systems engineering community through its support of systems engineering research and education. Events like CSER show that commitment to the profession.


OK, enough of the ad for the SERC. The two-day event was filled with an impressive line-up of speakers with a deep understanding of the practice and nuance of the subject. The venue was exceptional, and the opportunity to interact with peers and colleagues from academia, industry, and the research community was eye-opening.


As a relatively new member of the SPEC Innovations team, I had the good fortune to attend the conference with Dr. Steve Dam. We shared with the CSER community information about Innoslate, a model-based systems engineering tool that is free for academic use and currently used at over 100 universities around the world. The first day featured a succession of remarkable speakers. The first keynote was given by James Faist, Director of Research and Engineering for the DoD, who covered advanced capabilities. Breakout sessions were held mid-morning, with topics ranging from AI to Agile SE to SE effectiveness. After lunch, the second keynote was given by Dr. Jeffery Wilcox, Head of Digital Transformation at Lockheed Martin; who would have known that interfaces between systems and processes are becoming even more important than they already were? This was followed by afternoon breakout sessions and an evening reception.


Funny enough, during the morning of the first day I noticed an older gentleman at the next table who seemed to be keeping his own company. I nodded to him, he nodded back, and I went off to a breakout session. Day two’s morning keynote speaker was Kristen Baldwin, Deputy Director for Strategic Technology Protection and Exploitation at the DoD, who covered that topic. This was followed by breakout sessions ranging from model-based engineering to SE decision making and resilience. Lunch came next, followed by the final keynote speaker. To my surprise, the final speaker was the older gentleman I had noticed by himself the previous day. He happened to be William Shepherd, a retired Navy Captain who had previously served as a Navy SEAL platoon commander and operations officer. About midway through his naval career, Captain Shepherd decided to put in a package for the astronaut program. He was selected to the Astronaut Corps and flew three Space Shuttle missions. He then went on to command the initial expedition that built the International Space Station.


His keynote presentation started with a video of the preparation and training of his Space Station crew: a three-man team commanded by Captain Shepherd with two Russian flight engineers. All of the training, and the launch itself, took place in Russia and Kazakhstan. They reached orbit and began the initial construction of the International Space Station. Several Shuttle missions docked with the Station during this time as it grew to add living space, solar panels, and multiple labs. After more than four months, the crew returned to Earth.


What struck me most about Captain Shepherd’s address was the enormous systems engineering challenge that was overcome in the planning, design, integration, and test and, most importantly to the crew, in operations. If you think about a joint NASA/Russian space mission, the systems engineering challenges are enormous. All three crewmen needed to be fluent in both English and Russian. Systems and modules were made by multiple NASA contractors, multiple Russian contractors, and multiple European Space Agency contractors. Yet all of it was integrated and distilled into a model-based approach. Captain Shepherd said that the key to adding new modules, troubleshooting onboard issues, and scheduling upgrades was the model-based approach that was used. If he and the Russian crew members came to a language impasse, they would reach for the SE models, which intertwined in-depth schematics, color-coded processes, and color-coded callouts. These models ensured that all crew members understood the approach before they took action.


However, the models he was talking about were “old-fashioned” physical models, not Model-Based Systems Engineering (MBSE) digital models. The question is: can we replace the physical models with digital ones? The answer can only be “yes” once we obtain enough data to fully represent the physical system and have a way to organize and visualize that data. Innoslate provides a large step in that direction. It already provides cloud-native access to large amounts of computational power and data storage, and it already has many ways to visualize and track the data in the database. What’s the next step? Continue to watch as SPEC Innovations pushes the boundaries of MBSE.

Back to School: Using Innoslate® as a Systems Engineering Research Tool

When I worked on my dissertation research, I went out to Los Alamos, New Mexico to perform that research. I was privileged to work at the Clinton P. Anderson Meson Physics Facility, known by the acronym LAMPF; today it has been renamed the Los Alamos Neutron Science Center. LAMPF was a medium-energy (800 MeV) linear proton accelerator that was also used to produce pi-mesons (pions) and mu-mesons (muons), which we used for basic nuclear physics research. This very expensive tool, designed and built by many physicists and engineers before me and led by an amazing man, Dr. Louis Rosen (who let me call him Louie!), was a critical part of my ability to further research in nuclear physics. Another tool I used was a two-story-tall magnetic spectrometer, built by other physicists and engineers. To enable my research, I designed and had built (they had an incredible machine shop and people who knew how to build things) a scattering chamber. The facility also had fantastic computational capabilities for the time, in the form of DEC PDP-11s, VAX 11/780s, and CDC 7600s. All these tools enabled me to perform my experiments so I could meet my primary goal: obtaining my Ph.D. in Physics.

I know about now you are wondering what this has to do with systems engineering. As it turned out, I learned systems engineering the hard way: by going through the process of developing the experiment plan, staffing it, organizing the team (I took the graveyard shift, as I was the lowest-ranking member of the team even though I was effectively leading it), collecting the data, analyzing the data, and producing papers and my dissertation. If you are a systems engineering student starting your senior thesis project or a Masters/Ph.D. research project, you need to use the tools available for systems engineering so you do not “reinvent the wheel.” The whole idea of such research, particularly at the Masters and Ph.D. level, is to extend the art and science of systems engineering.

So what tools do you plan to use for your research? I have watched many students start with nothing but a computer and a programming language, like Java, C++, or Python, and go from there to reinvent pieces of tools already out there. They often feel they need to, just as I felt I needed a new scattering chamber, because they are not aware of the tools that are already available to build upon.

Innoslate® was designed to be a research tool for systems engineering. It can be used, of course, to perform most systems engineering tasks, such as requirements analysis, functional analysis, modeling and simulation, and even test and evaluation. So if you are in your senior design project, you can use the tool for free to advance your topic-area analyses. Most of those projects are practical applications, often supported by a company or government organization. By using Innoslate® you are using a cutting-edge tool that incorporates today's technologies, such as cloud computing and NLP (natural language processing, a branch of artificial intelligence). If you are pursuing an advanced degree, you can use the tool to explore ontologies for digital engineering by using the schema extender. If you are interested in creating new ways to look at systems engineering information, you can use the APIs to leverage the tremendous capabilities of the tool to create new user interfaces and visualizations, thus exploring the boundaries of Human-Computer Interaction (HCI). You can also use the built-in Discrete Event and Monte Carlo simulators to make synchronous calls to other web services and obtain information from them to simulate different events and their effects on the system of interest. Since Innoslate® was designed for scalability, you can also explore the bounds of "Big Data" through predictive analytics.

SPEC Innovations, the developer of Innoslate®, is happy to support your efforts. We provide a free version, with all the features, limited to 2,000 entities per project. That's automatic when you register with your ".edu" address. If you need more entities, ask us; we can, on a case-by-case basis, provide you with an unlimited number. If your research is sensitive, we have made special arrangements with Service organizations, such as the US Naval Postgraduate School and the US Air Force Academy, to have their own copies of the tool on their private clouds. We also provide organizational accounts for individual universities by arrangement with their professors, departments, and schools.

We only ask one thing in return. Please share the results of your work with us. Send us a link or, better yet, your papers, theses, or dissertations, so we can post them on our website. Together we can keep systems engineering moving forward.

Model-Based Systems Engineering De-Mystified

Join us August 30th at 2:00 pm ET for a special guest webinar with Dr. Warren Vaneman. Model-Based Systems Engineering (MBSE) is an ambiguous concept that means many things to many different people. The purpose of this presentation is to "de-mystify" MBSE, with the intent of moving the sub-discipline forward. Model-Based Systems Engineering was envisioned to manage the increasing complexity within systems and System of Systems (SoS). This presentation defines MBSE as the formalized application of modeling (static and dynamic) to support system design and analysis, throughout all phases of the system lifecycle, and through the collection of modeling languages, structures, model-based processes, and presentation frameworks used to support the discipline of systems engineering in a model-based or model-driven context. Using this definition, the components of MBSE (modeling languages, processes, structures, and presentation frameworks) are defined. The current state of MBSE is then evaluated against a set of effective measures. Finally, the presentation offers a vision for the future direction of MBSE.

Register here


Meet Your Host

Dr. Warren Vaneman is a Professor of Practice in the Systems Engineering Department at the Naval Postgraduate School, Monterey, CA. He has more than 30 years of leadership and technical positions within the U.S. Navy and the Intelligence Community. Dr. Vaneman has been conducting research in MBSE for unmanned systems, enterprise systems, and system of systems since July 2011. To enhance his research efforts, Dr. Vaneman teaches several courses in Systems Engineering and Architecting and System of Systems Engineering and Integration. Prior to joining NPS, Dr. Vaneman held various systems engineering positions within the Intelligence Community, including Chief, Architecture Analysis Division, and Chief Architect of the Enterprise Ground Architecture at the National Reconnaissance Office (NRO), and Lead Systems Engineer for Modeling and Simulation at the National Geospatial-Intelligence Agency (NGA). Dr. Vaneman is also a retired Captain in the Navy Reserve, where he was qualified as a Surface Warfare Officer, Space Cadre Expert, and Information Dominance Warfare Officer. He had the pleasure of serving in six command tours, including a command tour in Afghanistan. He has a B.S. from the State University of New York Maritime College, an M.S. in Systems Engineering and a Ph.D. in Industrial and Systems Engineering from Virginia Tech, and a Joint Professional Military Education Phase 1 Certificate from the Naval War College.



Data Analytics – Paving the Way for the Future of Digital Engineering

I recently attended the first Andrew P. Sage Senior Design Capstone Competition at George Mason University. This conference included student papers and presentations from GMU, West Point, University of Pennsylvania, US Naval Academy, Stevens Institute of Technology, and Virginia Tech. The conference is named for Andy Sage, who was the first Dean of Engineering at GMU and a prolific writer in the field of systems engineering. The students and faculty did him proud.

But perhaps the most impactful presentation on me was that of the keynote speaker, Dr. Kirk Borne. His topic was: "Using Analytics to Predict and to Change the Future." He was coming at the problem from a "Big Data" point of view, beginning early in the presentation with the picture below talking about zettabytes of data from airline engines. That is 1 × 10²¹ bytes of data.

I have often noted that in systems engineering, particularly in the early concept development phase, I have a sparse dataset, not a large one. In cutting edge work, such as defense applications, we often have only basic research where the massive data from other systems may not relate well to the new concept. However, during the presentation, I found myself writing many notes to myself about how the same concepts work even for smaller datasets.

Then I realized that we are already applying these kinds of techniques to Innoslate, as a result of applying natural language processing (NLP) to the information we are gathering and developing to create the system model.

For those new to NLP, Wikipedia defines it as “an area of computer science and artificial intelligence concerned with the interactions between computers and human (natural) languages, in particular how to program computers to fruitfully process large amounts of natural language data.” We currently use NLP in three of our Innoslate analytical tools: Requirements Quality Checker; Intelligence; and Traceability Assistant. The first two tools have been around for a while, but the Traceability Assistant is new with version 4.0.

If you are not familiar with the Requirements Quality Checker, it automates one of the more difficult problems in requirements management: knowing when you have good requirements. The picture below shows an example. The NLP algorithm assesses six of the eight quality attributes (Clear, Complete, Consistent, Design, Traceable, and Verifiable) shown on the sidebar below, and rolls them up into a quality score.

requirements management view and quality checker

We use this information to identify problems with the requirements and suggest fixes. Often those fixes are simple, such as a missing punctuation mark to complete the sentence or a missing key verb (e.g., "shall"). You can always override the suggestion and mark the requirement as passing the test. All such changes are recorded in the History record for that entity.
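The idea behind this kind of checker can be sketched in a few lines. The heuristics below (a "shall" verb, terminating punctuation, no vague terms) are illustrative only, not Innoslate's actual algorithm, and the vague-word list is a made-up example:

```python
import re

# Illustrative heuristics only -- NOT Innoslate's actual algorithm.
VAGUE_WORDS = {"quickly", "easily", "user-friendly", "appropriate", "etc"}

def check_requirement(text):
    """Return pass/fail results for a few sample quality attributes."""
    words = set(re.findall(r"[a-z\-]+", text.lower()))
    return {
        "verifiable": "shall" in words,           # key verb present
        "complete": text.rstrip().endswith("."),  # sentence is terminated
        "clear": not (words & VAGUE_WORDS),       # no vague terms
    }

def quality_score(text):
    """Roll the individual checks up into a single 0-100 score."""
    results = check_requirement(text)
    return round(100 * sum(results.values()) / len(results))

print(quality_score("The system shall log every transaction."))  # → 100
print(quality_score("The system responds quickly"))              # → 0
```

A real checker would assess many more attributes, but the roll-up pattern (individual boolean heuristics aggregated into one score) is the essential idea.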

Intelligence View also applies NLP technology against over 65 heuristics (i.e., rules of thumb) that represent best practices. Again, the NLP comes into play by looking at the roots of words and comparing them, so it quickly recognizes that "Wildfire" and "Wildfires" are potentially the same object. You can also select the "Fix" button, and a window pops up that explains the problem and helps you fix it.
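Root comparison of this sort can be illustrated with a naive suffix-stripping stemmer. This is a deliberately minimal sketch, not the NLP actually used in Intelligence View:

```python
# Naive suffix stripping to approximate a word's root.
# Illustrative only -- real stemmers (e.g., Porter) are more careful.
def stem(word):
    """Strip a few common English suffixes from a lowercased word."""
    w = word.lower()
    for suffix in ("ies", "s", "ing", "ed"):
        if w.endswith(suffix) and len(w) - len(suffix) >= 3:
            return w[: len(w) - len(suffix)]
    return w

def same_object(a, b):
    """Heuristically decide whether two labels name the same thing."""
    return stem(a) == stem(b)

print(same_object("Wildfire", "Wildfires"))  # → True
```

Comparing stems rather than raw strings is what lets a tool treat singular and plural mentions as one candidate entity.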

Finally, our newest application of NLP technology comes in the form of the Traceability Assistant. Innoslate's Traceability Assistant is a "dream come true" for all of us who have been working with relational databases. The real challenge has always been how to relate information between different classes of data. In fact, I was mapping two related policy documents the other day and asked my developers, "Is there some way to automate this process of tracing requirements between documents?" Then they showed me what they were working on: the Traceability Assistant. It uses NLP technology to read the information contained in the name and description fields of every item, compare them, and determine whether a pair is a match and how good a match it might be. In the example below, we can see different shades of green, where darker green indicates a higher-probability match. Since it is just an algorithm, you may not agree with its conclusions, so you still place the "X" in the box yourself; the tool shows the full name and description of the row and column entities so that you can make an informed decision. The best part is that this works with any relationship between entity classes, so we can use it for functional allocation as well as requirements traceability and all the other connecting relationships. Can you imagine the productivity increase from this?

traceability assistant hierarchical comparison matrix
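The matching step can be approximated with a simple token-overlap (Jaccard) similarity over the name and description fields. This is a stand-in sketch under that assumption, not the actual matching model the Traceability Assistant uses, and the sample entities are invented:

```python
# Jaccard similarity over name+description tokens -- a stand-in for
# whatever NLP matching the Traceability Assistant actually performs.
def tokens(text):
    return set(text.lower().split())

def match_score(entity_a, entity_b):
    """Score in [0, 1]: overlap of the two entities' token sets."""
    a = tokens(entity_a["name"] + " " + entity_a["description"])
    b = tokens(entity_b["name"] + " " + entity_b["description"])
    return len(a & b) / len(a | b) if a | b else 0.0

req = {"name": "Log transactions",
       "description": "the system shall log every transaction"}
fn = {"name": "Record transaction",
      "description": "log the transaction to persistent storage"}

# Higher scores would be rendered as darker shades of green.
print(f"{match_score(req, fn):.2f}")
```

In a tool, each cell of the comparison matrix would hold such a score, with the shading derived from it and the final accept/reject decision left to the engineer.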

Innoslate also has a Suspect Assistant: if relationships have already been created and reviewed, but changes are then made, it helps identify when the connected entities should likely no longer be linked. Many other tools simply flag everything downstream as suspect whenever anything changes, so that someone cleaning up the grammar can trigger a major review down the entire chain. What a waste of time and energy. Innoslate's Suspect Assistant instead highlights, in shades of red, the probability that traced entities should no longer be connected. It can also be used after a set of manual connections to identify where not enough information has been provided in the name/description to validate a connection between entities, helping you see where to enhance the clarity between connected entities.

suspect assistant hierarchical comparison matrix

Both these tools are available in the traceability matrix diagram provided in Innoslate 4.0. Our commitment to the customer, and our application of emerging technologies such as LML, cloud computing, and NLP, demonstrates that Innoslate is the tool for enabling 21st-century digital engineering.


The Future of Systems Engineering


I attended an interesting systems engineering forum this past week. A fair number of government organizations and contractors were participants. There were many interesting observations from this forum, but one presenter from a government agency said something that particularly struck me. He was saying that one of the major challenges he faced was finding people who were trained in UML and SysML. It made me think: “Why would it be difficult to find people trained in UML? Wasn’t UML a software development standard for nearly the last 20 years? Surely it must be a major part of the software engineering curriculum in all major universities?”

The Unified Modeling Language (UML) was developed in the late 1990s-early 2000s to merge competing diagrams and notations from earlier work in object-oriented analysis and design. This language was adopted by many software development organizations in the 2000s. But as the graph below shows, search traffic for UML has substantially declined since 2004.

This trend is reinforced by a simple Google search of the question: “Why do software developers not use UML anymore?”

It turns out that the software engineering community has moved on to the next big thing: Agile, which systems engineers are now also trying to morph into a systems engineering methodology, just as they did when they created the UML profile SysML (Systems Modeling Language).

This made me wonder: "Do systems engineers apply software engineering techniques to perform systems engineering in order to communicate better with the software engineers?" I suddenly realized that I have lived through many of these methodology transitions, from one approach to software and systems development to another.

My first experience in modeling was using flow charts in my freshman year class on FORTRAN. FORTRAN was the computer language used mostly by the scientific community in the 1960s through the 1990s. We created models using a standard symbol set like the one below.

Before the advent of personal computers, these templates were used extensively to design and document software programs. However, as we quickly learned in our classes, it was much quicker to write-execute-debug the code than it was to draw these charts by hand. Hence, we primarily used flowcharts to document, not design, the programs.

Later in life, I used a simplified version of this notation to convert a rather large (at the time) software program from Cray computers to VAX computers. I used a rectangular box for most steps and a rectangular box with a point on the side to represent decision points. This simplified approach provided the same results in a much easier-to-read and more understandable way. You didn't have to worry about the nuanced notations or become an expert in them.

Later, after getting a personal computer (a Macintosh 128K) I discovered some inexpensive software engineering tools that were available for that platform. These tools were able to create Data Flow Diagrams (DFDs) and State Transition Diagrams (STDs). At that time, I had moved from being a software developer into project management (PM) and systems engineering (SE). So, I tried to apply these software engineering tools to my systems problem, but they never seemed to satisfy the needs of the SE and PM roles I was undertaking.

In 1989, I was introduced to a different methodology in the RDD-100 tool. It contained the places to capture my information and (automatically) produced diagrams from the information; or I could use the diagrams to capture the information as well. All of a sudden, I had a language that really met my needs. Later, CORE applied a modified version of this language and became my tool of choice. The only problem was that no one had documented the language or gone to the effort of making it a standard, so arguments abounded throughout the community.

In subsequent years I watched systems engineers argue between functional analysis and object-oriented methods. The UML camp was pushing object-oriented tools, such as Rational Rose, while the functional analysis camp pushed tools such as CORE. We used both on a very major project for the US Army (yes, that one), and customers seemed to understand and like the CORE approach better (from my perspective). On other programs, I found a number of people using a DoD Architecture Framework (DoDAF) tool called Popkin System Architect, which was later procured by IBM (and more recently sold off to another vendor). Popkin included two types of behavioral diagrams: IDEF0s and DFDs. IDEF0 was another software development language adopted by systems engineers after software developers had moved on to object-oriented computer and modeling languages.

I hope you can now see the pattern: software engineers develop and use a language, which is later picked up by the systems engineering community; usually at a point where its popularity in the software world is declining. The systems engineering community eventually realizes the problems with that language and moves on. However, one language has endured. That is the underlying language represented in RDD-100 and CORE. It can trace its roots back to the heyday of TRW in the 1960s. That language was invented and used on programs that went from concept development to initial operational capability (IOC) in 36 months. It was used for both the systems and software development.

But the problem was, as noted above, there was no standard. A few of us realized the problems we had encountered in trying to use these various software engineering languages and wanted to create a simpler standard for use in both SE and PM. So, in 2012 a number of us got together and developed the Lifecycle Modeling Language (LML). It was based on work SPEC Innovations had done previously, and this group validated and greatly enhanced the language. The committee published LML on an open website so it could be used by anyone. But I knew even before the committee started that the language could not be easily enhanced without being instantiated in a software tool. So, in parallel, SPEC created Innoslate®. Innoslate (pronounced "In-no-Slate") provided the community with a tool to test and refine the language and to map it to other languages, including the DoDAF MetaModel 2.0 (DM2) and, in 2014, SysML (included in LML v1.1). Hence, LML provides a robust ontology for SysML (and UML) today. But it goes far beyond SysML. Innoslate has proven that many different diagram types (over 27 today) can be generated from the ontology, including IDEF0, N2, and many other forms of physical and behavioral models.

Someone else at the SE forum I attended last week also said something insightful. They talked about SysML as the language of today and said that "10 years from now there may be something different." That future language can and should be LML. To quote George Allen (the long-departed coach of the LA Rams and Washington Redskins): "The Future is Now!"