Why Do We Model?

We often say that the job of a systems engineer is to “optimize the system’s cost, schedule, and performance, while mitigating risks in each of these areas.” Note that this is essentially the same thing that the program manager does for the program, hence the close relationship between the two disciplines.

Everyone is talking about “Model-Based Systems Engineering” or MBSE, but why are we modeling? What are we supposed to be getting out of these models? To answer these questions, we have to go back to basics and talk about what we are doing as systems engineers.

Another aspect of systems engineering is that we need to be the honest broker, optimizing the design across all the different design disciplines. The picture below shows what would happen if we let any one discipline dominate the design.

Our modeling must support both of these optimization goals: 1) optimizing cost, schedule, and performance while mitigating risk; and 2) balancing the design across the disciplines. So how does modeling do that?

Using the Lifecycle Modeling Language (LML) and its implementation in the Innoslate® tool, we can accomplish both tasks. For cost, schedule, and performance optimization, we use only two diagrams, Action and Asset, along with the ontology classes of Action, Asset, Input/Output, and Conduit as the primary entities in those diagrams. Innoslate® also includes Resources, as well as allocation of Actions to Assets (the performed by/performs relationship) and of Input/Outputs to Conduits (transferred by/transfers). This capability to allocate entities to each other allows the functional model to be constrained by the physical model: Input/Outputs have a size, and Conduits have latency and capacity. Thus, we can calculate the appropriate delays for transmission of data, power, or any other physical flow.
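To make that calculation concrete, here is a minimal sketch in Python (the function name and values are invented for illustration; this is not Innoslate®'s internal algorithm):

```python
# Hypothetical sketch: delay of a physical flow across a conduit,
# derived from the Input/Output's size and the Conduit's attributes.

def transmission_delay(size_bits: float, latency_s: float, capacity_bps: float) -> float:
    """Return the time (in seconds) for a payload to traverse a conduit."""
    return latency_s + size_bits / capacity_bps

# Example: an 8-megabit sensor image over a 2 Mbps link with 50 ms latency.
print(transmission_delay(size_bits=8e6, latency_s=0.05, capacity_bps=2e6))  # 4.05 s
```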

Resources can be used to represent key performance parameters such as weight (mass) and power. Actions can produce, seize, or consume Resources. Another key performance parameter is timing. Time is included in each Action as the duration of that step; and since each Action can be decomposed, the timings of the subordinate steps accumulate into the overall system timings of interest. This approach gives us the information needed to predict the performance of the system.

Note that we can model the business, operations, or development processes this same way and thus use this modeling to derive the overall schedule for the program. So, we get to the Schedule part of optimization as well using the same approach. Talk about reducing confusion between the systems engineering and program management disciplines.

But let’s not forget Cost. Since LML defines an independent Cost class, we can use it to capture the costs incurred by personnel in each step of the process, as well as the costs of consuming resources.

So, if we can dynamically accumulate these performance parameters, schedule elements, and cost elements through process execution, we have the first part of our first optimization goal. We can easily execute the model using the discrete event simulator built into Innoslate®. Each execution follows at least one path through the model. The tool accumulates the values for cost, produces a Gantt chart schedule, and tracks Resource usage over time, which leads us to performance.
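As a rough sketch of what one such execution accumulates, consider this simplified single-path walk in Python (the step data are invented; a real discrete event simulator also handles branching and parallelism):

```python
# Hypothetical sketch: accumulating schedule, cost, and resource usage
# along one executed path of a process model.

steps = [
    # (action name, duration in hours, cost in dollars, power consumed in kW)
    ("Collect data",  2.0, 150.0, 0.5),
    ("Process data",  1.0, 100.0, 1.2),
    ("Transmit data", 0.5,  75.0, 0.8),
]

total_hours = sum(duration for _, duration, _, _ in steps)
total_cost  = sum(cost for _, _, cost, _ in steps)
peak_power  = max(power for _, _, _, power in steps)

print(f"schedule: {total_hours} h, cost: ${total_cost:.2f}, peak power: {peak_power} kW")
```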

But how do we get to risk? That’s where we find that the values we use for size, latency, capacity, duration, and the other numerical attributes of these entities can be represented by distributions. With these distributions, we can execute the built-in Monte Carlo simulator to run the model as many times as needed to create the distributions for cost, schedule, and performance (Resources). These distributions represent the uncertainty in achieving each item, and those uncertainties are directly related to the probabilities of occurrence for the risk in each area. If we add consequence to this probability, we have the estimated value for Risk. Of course, LML gives us a Risk class, which has been fully implemented in Innoslate® and is visualized using the Risk Matrix diagram.
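Here is a minimal sketch of that idea in Python, using the standard library’s random.triangular; the step distributions, threshold, and consequence value are all invented for illustration:

```python
import random

# Hypothetical Monte Carlo sketch: sample uncertain step durations,
# estimate the probability of missing a schedule threshold, and
# multiply by a consequence to get a risk exposure.

RUNS = 10_000
THRESHOLD_HOURS = 12.0
CONSEQUENCE_DOLLARS = 500_000.0  # assumed cost of missing the threshold

misses = 0
for _ in range(RUNS):
    # Three sequential steps; random.triangular(low, high, mode), in hours.
    total = (random.triangular(2.0, 6.0, 3.0)
             + random.triangular(1.0, 4.0, 2.0)
             + random.triangular(3.0, 8.0, 5.0))
    if total > THRESHOLD_HOURS:
        misses += 1

probability = misses / RUNS
print(f"P(schedule miss) ~ {probability:.1%}; "
      f"risk exposure ~ ${probability * CONSEQUENCE_DOLLARS:,.0f}")
```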

Now that we have the first optimization complete, how do we get to the next one: optimization across the design disciplines? LML comes into play there as well. LML is an easy-to-understand language designed for all the stakeholders, including the design engineers, management, users, operators, and cost analysts. They can all play their roles in the system development, many using their own tools. LML provides a common language that anyone can use, and we can easily translate what the electrical, mechanical, or any other engineer does into this language. Innoslate® also provides the capability to store and view CAD files. Results from Computational Fluid Dynamics (CFD) codes or other physics-based models can also be captured as Artifacts. We can take the summary results and translate them into the performance distributions used in the Monte Carlo calculations. For example, if we use Riverbed to characterize the capacity (bandwidth) and latency of a network, we can take the resulting distributions and use them to refine our model. We can then rerun the Monte Carlo calculation and see the impact.

LML and Innoslate® give us the capability to meet the optimization goals of both systems engineering and program management in a way that is simple and easy to explain to decision makers. Think of LML and Innoslate® as modeling made useful.

Implement a Strategy for Transforming from Office Products to Model-Based Systems Engineering (MBSE)

Often people will ask me: “Who is your major competitor?” My usual response is “Microsoft Office.” I don’t say this because MS Office is a bad tool … it is not. It is a very good tool for publishing information. I use it often and have become quite the expert in using it for books, papers, accounting, and presentations. But for systems engineering, it’s not the right toolset. Unfortunately, most people trying to perform SE tasks use MS Office because it’s the only tool they have. It’s cheap and already approved for use by management, but it does not provide all the capabilities SEs need. You can’t easily perform the kinds of analyses we require, such as functional analysis, simulation, requirements analysis, and risk analysis. That’s not to say you can’t perform these analyses the way your parents (or grandparents) did. They did it, but usually with armies of people, with relatively simpler systems, and with libraries staffed by librarians to help them find what they needed. If you think I am exaggerating, I saw this in effect as late as 1986, before personal computers were widely deployed.

But with the widespread availability of networked, high-performance computer systems providing ready access to amazing processing capabilities, and the breadth of the World Wide Web, we also have new tools that can do much more. And it’s a good thing, because at the same time this technology has caused system complexity to grow exponentially. We no longer have the “armies of people” and “librarians” available to help us do the work. So we have to do more with less.

So, let’s say your management has finally realized that you need better tools, or they just want to be “fully buzzword compliant” by jumping on the MBSE train. Now how are you going to come up to speed on a new toolset, while still continuing to meet cost and schedule?

The purpose of this paper is to help you make the shift from products like MS Office to a true MBSE tool: Innoslate®. Some of these strategies may be useful in migrating from Office to other tools, but those tools really don’t have all the features you need, and assembling the full set means spending a lot more. The money spent isn’t just the cost of the tools; it’s also the people cost of operating them. Much like Office, the toolsets offered by others are really just sets of individual tools loosely “integrated” into a package. That’s why they have so many “plug-ins.”

So on with the strategies.

Strategy 1: Start Slow

Just like you can’t eat an elephant all at once (and I don’t recommend eating elephants at all!), you should migrate your information a piece at a time. For example, perhaps you have a big library of Visio diagrams that you want to reuse in Innoslate®. You might ask, “Well, does Innoslate® have a way to import diagrams from Visio?” The answer is no. The reason is that tools like Visio are just drawing tools that don’t provide “semantic” meaning. One definition of that term is: “of, relating to, or arising from the different meanings of words or other symbols.” In other words, drawings require a significant set of rules to enable them to represent information. An example from flowcharting is the use of a diamond to represent a decision or a rectangle to represent a process. Both the writer and the reader of these diagrams must fully understand those representations for the chart to have any meaning. Unfortunately, with a pure drawing tool like Visio, people use these symbols incorrectly or in different ways, which makes it difficult for the reader to know what the writer really meant. We have the same problem with unstructured words: writers use obscure words or use words incorrectly, which interferes with communication.

This problem is why MBSE has taken off as a concept for enhancing communication. Tools can help enforce the rules for diagrams. Diagrams can even be analyzed automatically by computer algorithms that suggest improvements to make them more compliant with the rules.

So, what does this mean? It means the diagrams as drawn in Visio are likely in error, and just moving the boxes and lines over to another tool will bring all those errors with them. What should you do?

We recommend taking a few of the diagrams you are using right now, putting them on one side of your desk (or on your second monitor, if you have one), and starting a new diagram in Innoslate®. We recommend starting with the Action Diagram for a flow or process chart. You will need to interpret the information on the current diagram to create the new one. Take advantage of Innoslate®’s capability to decompose Actions, which simplifies large diagrams and lets you identify subprocesses that may be repeated in various parts of the diagram. In this way, you will gain a better understanding of the tool and the limitations (rules) that govern the diagrams.

 

Strategy 2: Only Start on a New Task

In this approach you keep legacy information separate, or use Innoslate® Artifact class entities to store the files from previous work so you can find them if you need them. If you don’t have a strong requirements document from your customer (and you usually don’t), we again recommend starting with the Action Diagram and capturing the current operational or business processes. The purpose of these models is to identify where the problem areas exist; you can then postulate solutions to those problems.

 

Strategy 3: Start with an Innoslate® Workshop

An Innoslate® Workshop makes learning Innoslate® easier by having our trainers work directly with your problem as the basis for the training. The training is tailored to your processes, your situation (such as the phase of development), and your problem, which gives it greater relevance to your people. It has the added benefit of helping you get started on solving your particular problem. Don’t worry about us knowing too much about your business. We are happy to sign any appropriate non-disclosure agreements (NDAs), and our personnel have the necessary clearances to help you with your problem at any level of classification. SPEC Innovations is a woman-owned small business, so the chances of a conflict of interest are negligible.

 

Start Any Time

This, in a sense, is not a strategy, because it applies regardless of the situation. The timing is never perfect for moving from one way of doing things to another. The sooner you get started, the sooner you can reap the benefits of MBSE. Just get in there and apply any or all of the strategies above, and let us help you make it as painless as possible. Ben Franklin said: “There are no gains without pains.” It applies here as well. Just like starting an exercise program in the new year, the sooner you start, the sooner you feel better.

Back to School: Using Innoslate® as a Systems Engineering Research Tool

When I worked on my dissertation research, I went out to Los Alamos, New Mexico to perform that research. I was privileged to work at the Clinton P. Anderson Meson Physics Facility, known by the acronym LAMPF; today it has been renamed the Los Alamos Neutron Science Center. LAMPF was a medium-energy linear proton accelerator (800 MeV) that was also used to produce pi-mesons (pions) and mu-mesons (muons) for basic nuclear physics research. This very expensive tool, designed and built by many physicists and engineers before me, led by an amazing man, Dr. Louis Rosen (who let me call him Louie!), was a critical part of my ability to further research into nuclear physics. Another tool I used was a magnetic spectrometer, two stories tall, built by other physicists and engineers. To enable my research I designed, and had built (they had an incredible machine shop and people who knew how to build things), a scattering chamber. The facility also had what were, at the time, fantastic computational capabilities in the form of DEC PDP-11s, VAX 11/780s, and CDC 7600s. All these tools enabled me to perform my experiments so I could meet my primary goal: obtaining my Ph.D. in Physics.

I know about now you are wondering what this has to do with systems engineering. As it turned out, I learned systems engineering the hard way: by going through the process of developing the experiment plan, staffing it, organizing the team (I took the graveyard shift, as I was the lowest-ranking member of our team even though I was effectively leading it), collecting the data, analyzing the data, and producing papers and my dissertation. If you are a systems engineering student starting your senior thesis project or Master’s/Ph.D. research, you need to use the tools available for systems engineering so you do not “reinvent the wheel.” The whole idea of such research, particularly at the Master’s and Ph.D. level, is to extend the art and science of systems engineering.

So what tools do you plan to use for your research? I have watched many students start with nothing but a computer and a programming language, like Java or C++ or Python, and go from there to reinvent pieces of tools already out there. They often feel they need to, just as I felt I needed a new scattering chamber, because they aren’t aware of the tools that are available to build upon.

Innoslate® was designed to be a research tool for systems engineering. It can be used, of course, to perform most systems engineering tasks, such as requirements analysis, functional analysis, modeling and simulation, and even test and evaluation. So if you are in your senior design project, you can use the tool for free to advance your topic area analyses. Most of those projects are practical applications, often supported by a company or government organization. By using Innoslate® you are using a cutting-edge tool that incorporates today’s technologies, such as cloud computing and NLP (natural language processing, a branch of artificial intelligence). If you are pursuing an advanced degree, you can use the tool to explore ontologies for digital engineering via the schema extender. If you are interested in creating new ways to look at systems engineering information, you can use the APIs to leverage the tool’s capabilities to create new user interfaces and visualizations, thus exploring the boundaries of Human-Computer Interaction (HCI). You can also use the built-in Discrete Event and Monte Carlo simulators to make synchronous calls to other web services and obtain information from them to simulate different events and their effects on the system of interest. Since Innoslate® was designed for scalability, you can also pursue the bounds of “Big Data” by exploring predictive analytics.

SPEC Innovations, the developer of Innoslate®, is happy to support your efforts. We provide a free version with all the features, limited to 2,000 entities per project. That’s automatic when you register with your “.edu” address. If you need more entities, ask us; we can, on a case-by-case basis, provide you with an unlimited number. If your research is sensitive, we have made special arrangements with Service organizations, such as the US Naval Postgraduate School and the US Air Force Academy, to have their own copies of the tool on their private clouds. We also provide organizations for individual universities upon arrangement with their professors, departments, and schools.

We only ask one thing in return: please share the results of your work with us. Send us a link or, better yet, your papers, theses, or dissertations, so we can post them on our website. Together we can keep systems engineering moving forward.

The Future of Systems Engineering

 

I attended an interesting systems engineering forum this past week. A fair number of government organizations and contractors were participants. There were many interesting observations from this forum, but one presenter from a government agency said something that particularly struck me. He was saying that one of the major challenges he faced was finding people who were trained in UML and SysML. It made me think: “Why would it be difficult to find people trained in UML? Wasn’t UML a software development standard for nearly the last 20 years? Surely it must be a major part of the software engineering curriculum in all major universities?”

The Unified Modeling Language (UML) was developed in the late 1990s and early 2000s to merge competing diagrams and notations from earlier work in object-oriented analysis and design. The language was adopted by many software development organizations in the 2000s. But as the graph below shows, search traffic for UML has declined substantially since 2004.

This trend is reinforced by a simple Google search of the question: “Why do software developers not use UML anymore?”

It turns out that the software engineering community has moved on to the next big thing: Agile, which systems engineers are now also trying to morph into a systems engineering methodology, just as they did with UML when they created its profile, the Systems Modeling Language (SysML).

This made me wonder, “Do Systems Engineers try to apply software engineering techniques to perform systems engineering and thereby communicate better with the software engineers?” I suddenly realized that I have been living through many of these methodology transitions from one approach to software development and systems development to another.

My first experience in modeling was using flow charts in my freshman-year class on FORTRAN, the computer language used mostly by the scientific community from the 1960s through the 1990s. We created models using a standard symbol set like the one below.

Before the advent of personal computers, these templates were used extensively to design and document software programs. However, as we quickly learned in our classes, it was much quicker to write, execute, and debug the code than to draw these charts by hand. Hence, we primarily used flowcharts to document the programs, not to design them.

Later in life, I used a simplified version of this notation to convert a rather large (at the time) software program from Cray computers to VAX computers. I used a rectangular box for most steps and a rectangular box with a point on the side to represent decision points. This simplified approach provided the same results in a much easier-to-read, more understandable way. You didn’t have to worry about nuanced notations or become an expert in them.

Later, after getting a personal computer (a Macintosh 128K), I discovered some inexpensive software engineering tools available for that platform. These tools could create Data Flow Diagrams (DFDs) and State Transition Diagrams (STDs). By that time, I had moved from software development into project management (PM) and systems engineering (SE). So I tried to apply these software engineering tools to my systems problems, but they never seemed to satisfy the needs of the SE and PM roles I was undertaking.

In 1989, I was introduced to a different methodology in the RDD-100 tool. It contained the places to capture my information and (automatically) produced diagrams from that information; or I could use the diagrams to capture the information. All of a sudden, I had a language that really met my needs. Later, CORE applied a modified version of this language and became my tool of choice. The only problem was that no one had documented the language or gone to the effort of making it a standard, so arguments abounded throughout the community.

In subsequent years I watched systems engineers argue between functional analysis and object-oriented methods. The UML camp was pushing object-oriented tools, such as Rational Rose, while the functional analysis camp pushed tools such as CORE. We used both on a very major project for the US Army (yes, that one), and customers seemed to understand and like the CORE approach better (from my perspective). On other programs, I found a number of people using a DoD Architecture Framework (DoDAF) tool, Popkin’s System Architect, which was later procured by IBM (and more recently sold off to another vendor). Popkin included two types of behavioral diagrams: IDEF0s and DFDs. IDEF0 was another software development language adopted by systems engineers after software developers had moved on to object-oriented computer and modeling languages.

I hope you can now see the pattern: software engineers develop and use a language, which is later picked up by the systems engineering community, usually at a point where its popularity in the software world is declining. The systems engineering community eventually realizes the problems with that language and moves on. However, one language has endured: the underlying language represented in RDD-100 and CORE. It traces its roots back to the heyday of TRW in the 1960s. That language was invented and used on programs that went from concept development to initial operational capability (IOC) in 36 months, and it was used for both the systems and software development.

But the problem was, as noted above, there was no standard. A few of us realized the problems we had encountered in trying to use these various software engineering languages and wanted to create a simpler standard for use in both SE and PM. So, in 2012 a number of us got together and developed the Lifecycle Modeling Language (LML). It was based on work SPEC Innovations had done previously, and this group validated and greatly enhanced the language. The committee published LML on an open website (www.lifecyclemodeling.org) so it could be used by anyone. But I knew even before the committee started that the language could not easily be enhanced without being instantiated in a software tool. So, in parallel, SPEC created Innoslate® (pronounced “In-no-Slate”). Innoslate provided the community with a tool to test and refine the language and to map it to other languages, including the DoDAF MetaModel 2.0 (DM2) and, in 2014, SysML (incorporated in LML v1.1). Hence, LML provides a robust ontology for SysML (and UML) today. But it goes far beyond SysML: Innoslate has proven that many different diagram types (over 27 today) can be generated from the ontology, including IDEF0, N2, and many other forms of physical and behavioral models.

Someone else at the SE forum I attended last week said something insightful as well. They talked about SysML as the language of today and said that “10 years from now there may be something different.” That future language can and should be LML. To quote George Allen (the long-departed coach of the LA Rams and Washington Redskins): “The Future is Now!”

 

Why MBSE Still Needs Documents

A lot of people are pushing Model-Based Systems Engineering (MBSE) as a way to deliver just models … and by models they mean drawings. The drawings can and should meet the criteria provided by the standards, be it SysML, BPMN, or IDEF. But ultimately, as systems engineers, we are on the hook to deliver documents. These documents (specifications) form the basis for contracts and thus have significant legal ramifications. If the specifier uses a language that not everyone understands and supplies only drawings in the delivered model, confusion will reign supreme. Even worse, if the tool does not enforce the standards and allows users to put anything on the diagram, then all bets are off. You can imagine how the lawyers salivate over this kind of situation.

But it’s even worse, really, because not only are diagram standards routinely ignored, but so are other best practices, such as including a unique number and a description on every entity in the database. As simple as this sounds, most people put off doing these simple things until later, if ever. This leads us to our first question: 1) Is a model a better method to specify a system?

This question requires us to look at the underlying assumption behind delivering models vs. a document. The underlying assumption is that the model provides a better communication of the complete thoughts behind the design so that the specification is easier to understand and execute. Which leads us to the next question: 2) Can a document provide the same thing?

Not if we use standard office software to produce the document. The way it is commonly done today: someone writes up a document in a tool like MS Word, that file is shipped around for everyone to comment on (using track changes, naturally), and all the comments are adjudicated in a “Comment Matrix.” Once the document is completed, someone converts it to PDF (a simple “Save as …” in MS Word). In the worst case, someone prints the document and scans it into a PDF. Now we have lost all traceability, and even the ability to hyperlink portions of the information to other parts of the design, making requirements traceability very difficult.

However, if you author your document in a tool like Innoslate, you can use its Documents View to create the document as entities in the database. You can link the individual entities, using the built-in or user-created relationships, to other database entities, such as the models in the Action Diagram or Test Cases. This provides traceability to both the document and the models. In fact, the diagrams in Innoslate can be embedded in the document as well, keeping it live and reducing the configuration management problem inherent in the standard approach.

MBSE doesn’t mean the end of documents; it means using models to analyze data and create more informative documents. Using a tool like Innoslate lets you have the best of both worlds: documents and models in one complete, integrated package.

It’s Time for Government to Embrace the Cloud Computing Revolution

We are sometimes our own worst enemies! We want something, but at the same time we put up barriers to obtaining it. A perfect example was at an Industry Day I recently attended. The customer had put out a request for information (RFI) and was holding a day to present what was going on with the program to the potential contractors. No procurement was discussed, only information about how they wanted to implement model-based systems engineering (MBSE). In particular, they wanted to know what kind of contracting language should be used to provide better requests for proposals (RFPs). However, they also said that we could not have one-on-one technical conversations with the government technical personnel. I call that a “self-inflicted denial-of-service attack.”

Cloud computing is the most common self-inflicted denial of service we encounter. We are all familiar now with DNS (Domain Name System) attacks. They seem to be a frequent occurrence, and it’s frustrating when we can’t get on our favorite website because a troll has attacked it.

Because of these trolls and all their attack vectors, many in government have resisted adopting cloud computing. They think: “Clouds are dangerous … I don’t have control over my data … someone might steal it.” All the while, their corporate networks have been hacked by every major player in the world. If someone hacks into your corporate network, everything they get is related to your organization and what it does. In other words, everything they get is gold. But isn’t cloud computing, as provided by large providers like Amazon, Google, and Microsoft, more secure than your corporate network?

Let’s take Google for example. First, they don’t tell anyone the locations of their data centers. They provide complete physical security. They build all their own servers from scratch and destroy them at the end of their useful life. They have all the firewalls and software detection capabilities needed, and more. They encrypt the data at rest (and you should be sending your data encrypted, via HTTPS at the least). They randomize the filenames, so you need a map to find anything. They meet and exceed the FedRAMP requirements.

Does your corporate (or government) network do all that? Probably not. An Amazon Web Services representative explained to me, “FedRAMP requires over 200 security controls; we have over 2,000 of them.” The last thing anyone from these major “public” cloud providers wants is some hacker successfully penetrating their network and capturing critical user data. They could (and would) be sued.

I was talking to a gentleman from the government about cloud computing the other day, and he told me, “No one has ever told me how they can clean up a spill on the cloud.” [For those not in the know, a “spill” is when you accidentally put information somewhere it doesn’t belong.] I did not have the presence of mind at the time, but I should have asked, “What do you do now with your enterprise e-mail system?” I can guarantee they do not go around tracking down backups and destroying hard drives. Deleting the data results in it being written over hundreds of times in a matter of minutes.

So, it’s time to stop committing denial-of-service attacks on ourselves. It’s time to embrace the cloud computing revolution and get on board. The commercial world, for the most part, did this half a decade ago. If we want to speed up and improve government, it needs to figure out how to use the cloud now.

How to Choose the Right MBSE Tool

Find the Model-Based Systems Engineering Tool for Your Team

A model-based systems engineering tool can provide you with accuracy and efficiency. You need a tool that can help you do your job faster, better, and cheaper. Whether you are using legacy tools like Microsoft Office or are looking for an MBSE tool that better fits your team, here are some features and capabilities you should consider.

Collaboration and Version Control

It’s 2018. The MBSE tool you are looking at should definitely have built-in collaboration and version control. You want to be able to communicate quickly and effectively with your team members and customers. Simple features such as a chat system and comment fields are a great start. Workflow and version control are more complex features but very effective. Workflow is a great feature for a program manager: it allows the PM to design a process workflow for the team that sends out reminders and approvals. Version control lets users work together simultaneously on the same document, diagram, etc. If you are working in a team of two or more people, you need a tool with version control. Otherwise you will waste a lot of time waiting for a team member to finish a document or diagram before you can work on it.

Built-in Modeling Languages Such as LML, SysML, BPMN, Etc.

Most systems engineers need to be able to create uniform models. LML encompasses the necessary aspects of both SysML and BPMN. If you would like to try a simpler modeling language for complex systems, LML is a great way to do that. A built-in modeling language allows you to make your models correct and understandable to all stakeholders.

Executable Models

An MBSE tool needs to be much more than just a drag-and-drop drawing tool; the models need to be executable. Executable models ensure accurate processes through simulation. Innoslate’s activity diagram and action diagram are both executable through the discrete event and Monte Carlo simulators. With the discrete event simulator, you will not only see your process models execute, but you will also see the total time, costs, resources used, and slack. The Monte Carlo simulator will show you the standard deviations of your model’s time, cost, and resources.

Easy to Learn

It can take a lot of time and money to learn a new MBSE tool. You want a relatively short learning curve. First, look for a tool that has an easy user interface. A free trial, sandbox, or starter account is a major plus; it lets you get a good feel for how easy the tool is to learn. Look for tools that provide free online training. It’s important that the tool provider is dedicated to educating their users. They should have documentation, webinars, and free or included support.

Communicates Across Stakeholders

Communication across the system/product lifecycle is imperative. Most of us work on very diverse teams. Some of us have backgrounds in electrical engineering or physics or maybe even business. You need to be able to communicate across the entire lifecycle. This means the tool should have classes that meet the needs of many different backgrounds, such as risk, cost, decisions, assets, etc. A tool that systems engineers, program managers, and customers can all understand is ideal. The Lifecycle Modeling Language (LML) is a modeling language designed to meet all these stakeholder needs.

Full Lifecycle Capability

A tool with full lifecycle capability will save you money and time. If you don’t choose a tool with all the features needed for the project’s lifecycle, you will have to purchase several different tools, each of which can cost as much as a single full-lifecycle MBSE tool. You will also have to spend more on training, since you will not be able to train everyone as one large group. Most tools do not work together, so you will have to spend resources on integrating them, which makes the overall project cost a lot more. This is why Innoslate is a full lifecycle MBSE solution.

 

It’s important to find the tool that is right for your project and your team. These are just helpful guidelines, and you might need to adjust some of them for your specific project. If you would like to see whether Innoslate is the right tool for your project, get started with it today or call us to see if our solution is a good fit for you.

 

Why Do We Need Model-Based Systems Engineering?

MBSE is one of the latest buzzwords to hit the development community.

The main idea was to transform the systems engineering approach from “document-centric” to “model-centric.” Hence, the systems engineer would develop models of the system instead of documents.

But why? What does that buy us? Switching to a model-based approach helps: 1) coordinate system design activities; 2) satisfy stakeholder requirements; and 3) provide a significant return on investment.

Coordinating System Design Activities

The job of a systems engineer is in part to lead the system design and development by working with the various design disciplines to optimize the design in terms of cost, schedule, and performance. The problem with letting each discipline design the system without coordination is shown in the comic.

If each discipline optimized for its own area of expertise, the airplane (in this case) would never get off the ground. The systems engineer works with each discipline and balances the needs in each area.

MBSE can help this coordination by providing a way to capture all the information from the different disciplines and share it with the designers and other stakeholders. Modern MBSE tools, like Innoslate, provide the means for this sharing, as long as the tool is easy for everyone to use. A good MBSE tool will have an open ontology, such as the Lifecycle Modeling Language (LML); many ways to visualize the information in different interactive diagrams (models); the ability to verify that the logic and modeling rules are being met; and traceability between all the information from all sources.

Satisfying Stakeholder Requirements

Another part of the systems engineer’s job is to work with the customers and end-users who are paying for the product. They have “operational requirements” that must be satisfied so that they can meet their business needs. Otherwise, they will no longer have a business.

We use MBSE tools to help us analyze those requirements and manage them to ensure they are met at the end of product development. As such, the systems engineer becomes the translator between the electrical engineers, the mechanical engineers, the computer scientists, the operators of the system, the maintainers of the system, and the buyers of the system. Each speaks a different language. The idea of using models is to provide this communication in a simple, graphical form.

We need to recognize that many types of systems engineering diagrams (models) do not communicate to everyone, particularly the stakeholders. That’s why documents contain both words and pictures: they communicate not only the visual but also explain the visual image to those who do not understand it. We need an ontology and a few diagrams that seem familiar to almost anyone. In other words, we need something that can both model the system and communicate well with everyone.

Perhaps the most important thing about this combined functional and physical model is that it can be tested to ensure that it works. Using discrete event simulation, the model can be executed to create timelines, identify resource usage, and accumulate cost. In other words, it allows us to optimize the cost, schedule, and performance of the system through the model. Finally, we have something that helps us do our primary job. Now that’s model-based systems engineering!

Providing a Significant Return on Investment

We can understand the idea of how systems engineering provides a return on investment from the graph.

The picture shows what happens when we do not spend enough time and money on systems engineering: the result is often cost overruns, schedule slips, reduced performance, and program cancellations. Something not shown on the graph, since it is based on NASA data for unmanned satellites, is the potential loss of life due to poor systems engineering.

MBSE tools help automate the systems engineering process by providing a mechanism not only to capture the necessary information more completely and traceably, but also to verify that the models work. If those tools contain simulators to execute the models, and from that execution provide a means to optimize cost, schedule, and performance, then fewer errors will be introduced in the early requirements development phase. Eliminating those errors prevents the cost overruns and problems that might not surface in traditional document-centric approaches.

Another cost reduction comes from conducting model-based reviews (MBRs). An MBR uses the information within the tool to show reviewers what they need to ensure that the review evaluation criteria are met. The MBSE tool can provide a roadmap for the review using internal document views and links, and provide commenting capabilities so that reviewers’ questions can be posted. The developers can then use the tool to answer those comments directly. By not having to print copies of the documentation for everyone and then consolidate the markups into a document for adjudication, we cut out several time-consuming steps, which can reduce the labor cost of the review by an order of magnitude. This MBR approach can reduce the time to review and respond from weeks to days.

Bottom-line

The purpose of “model-based” systems engineering was to move away from being “document-centric.” MBSE is much more than just a buzzword; it’s an approach that allows us to develop, analyze, and test complex systems. Most importantly, we need MBSE because it provides a means to coordinate system design activities, satisfy stakeholder requirements, and provide a significant return on investment. The “model-based” technique is only as good as the MBSE tool you use, so make sure to choose a good one.

How to Use Innoslate to Perform Failure Modes and Effects Criticality Analysis

“Failure Mode and Effects Analysis (FMEA) and Failure Modes, Effects and Criticality Analysis (FMECA) are methodologies designed to identify potential failure modes for a product or process, to assess the risk associated with those failure modes, to rank the issues in terms of importance and to identify and carry out corrective actions to address the most serious concerns.”[1]

FMECA is a critical analysis required for ensuring the viability of a system during the operations and support phase of the lifecycle. A major part of FMECA is understanding the failure process and its impact on the operations of the system. The figure below shows an example of how to model a process to include the potential for failure. Duration attributes, Input/Output, Cost, and Resource entities can be added to this model, which can then be simulated to begin estimating metrics. You can use this with real data to understand the values of existing systems, or derive the needs of a new system (thresholds and objectives) by including this kind of analysis in the overall system modeling.

[Figure: FMEA Action Diagram]

Step one is to build this Action Diagram (for details on how to do this, please reference the Guide to Model-Based Systems Engineering). Add a loop to periodically enable the decision on whether or not a failure occurs. The time between these decisions can be adjusted via the number of iterations of the loop and the duration of the “F.11 Continue Normal Operations” action.

Adjust the number of iterations by selecting the loop action (“F.1 Continue to operate vehicle?”) and pressing the </> Script button (see below). A dialog appears asking you to edit the action’s script. You can use the pull-down menu to select Loop Iterations, Custom Script, Probability (Loop), or Resource (Loop). In this case, select “Loop Iterations.” Then type in the number (choose 100), as seen in the figure below.

Next, change the duration of this action and of F.11. Since the loop decision is not a factor in this model, you can give it a nominally small time (1 minute, as shown). For “F.11 Continue Normal Operations,” choose 100 hours. Combined with this path’s branch percentage of 90%, that gives roughly 900 operating hours between failures, which is not unusual for a vehicle in a suburban environment. We could provide a more accurate estimate, including using a distribution for the normal operating hours.
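A quick back-of-the-envelope check of that 900-hour figure (a geometric expectation, sketched here in Python):

```python
# Each loop pass: 90% chance of a 100-hour normal-operations cycle,
# 10% chance of taking the failure branch instead.
p_continue = 0.9
hours_per_cycle = 100

expected_cycles = p_continue / (1 - p_continue)   # = 9 normal cycles per failure
print(expected_cycles * hours_per_cycle)          # ~900 operating hours between failures
```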

The 90% branch probability comes from the script for the OR action (“F.2 Failure?”). That selection results in the dialog box below.

Now, if you assume a failure occurs approximately 10% of the time, you can turn to the failure modes themselves. Since the failure modes are probabilistic in nature, the paths need to be selected based on those probabilities. The second OR action (“F.3 Failure Mode?”) shows three possible failure modes. You can add more by selecting F.3 and using the “+Add Branch” button to represent other failure modes, such as “Driver failure,” “Hit obstacle,” or “Guidance System Loss.” A conceptual sketch of this kind of branch selection follows.
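Conceptually, the simulator’s branch selection works like a weighted random draw. A toy sketch in Python (the mode names and weights here are invented, not taken from the figure):

```python
import random

# Hypothetical failure-mode selection by branch probability.
modes   = ["Flat tire", "Dead battery", "Engine failure"]
weights = [0.6, 0.3, 0.1]  # branch probabilities; must sum to 1

chosen = random.choices(modes, weights=weights, k=1)[0]
print(f"This run's failure mode: {chosen}")
```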

Note that to change the default names (Yes, No, Option) to the names of the failure modes, just double-click on the name and a dialog will pop up (as on the right). Type in the name you prefer.

To finish off this model, add durations to the various other actions that may result from the individual failures. The collective times represent the impact of the failure on the driver’s time. Since you do not yet have data for how long each of these steps would take, just estimate them using triangular distributions of time (see the sidebar below).

This shows an estimate from a minimum of ½ hour to a maximum of 1 hour, with the mean being ¾ hour. If you do this for the other actions, you can now execute the model to determine the impacts on time.
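If you want to see what such a triangular estimate produces, here is a short sketch of sampling it outside the tool, using Python’s random.triangular (this is not Innoslate’s sampler):

```python
import random

# Triangular repair-time estimate: min 0.5 h, max 1.0 h, mode 0.75 h.
samples = [random.triangular(0.5, 1.0, 0.75) for _ in range(10_000)]

mean_hours = sum(samples) / len(samples)
print(f"mean repair impact ~ {mean_hours:.2f} hours")  # ~0.75, as expected
```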

Note that you could also accumulate costs by adding a related Cost entity to each of the actions. Simply create an overall cost entity (e.g., “Failure Costs”) and then decompose it into the various repair costs. You can then assign the costs to the actions using a Hierarchical Comparison matrix. Select the parent process action (“F Vehicle Failure Process”) and use the Open menu to select the comparison matrix (at the bottom of the menu). You will then see a sidebar that asks for the “Target Entity,” which is the “Failure Costs” entity you just created. Then select the “Target Relationship”; the only one between costs and actions is “incurs.” Push the blue “Generate” button to obtain the matrix, then select the intersections between the process steps and the costs. This creates the relationships between the actions and the costs. The result is shown below.

[Figure: Hierarchical Comparison Matrix]

If you have not already added the values of the costs, you can do it from this matrix. Just select one of the cost entities and its attributes will show up in the sidebar (see below).

Note how you can add distributions here as well.

Finally, you want to see the results of the model. Execute the model using the discrete event and Monte Carlo simulators. To access these simulators, just select “Simulate” from the Action Diagram for the main process (“F Vehicle Failure Process”). You can see the results of a single discrete event simulation below. Note that the gray boxes mean those actions were never executed; they represent the rarer failure mode of an engine failure (assume that you change your oil regularly, or this would occur much more often).

To see the impact of many executions, use the Monte Carlo simulator. The results of this simulation for 1,000 runs are shown below.

As a result, you can see that for about a year in operation, the owner of this vehicle can expect to spend an average of over $1,560. However, you could spend more than $3,750 in a bad year!

For more detailed analysis, you can use the “CSV Reports” to obtain the details of these runs.

[1] From http://www.weibull.com/hotwire/issue46/relbasics46.htm accessed 1/18/2017

Developing Requirements from Models – Why and How

One of the benefits of having an integrated solution for requirements management and model-based systems engineering is that you can easily develop requirements from models. This is an increasingly common practice in the systems engineering community. Often, as requirements managers, we are given the task of updating a product or system, or developing an entirely new one. A great place to start in this situation is to create two models: a current model and a future (proposed) model. This way you can predict where the problems are in the current system and develop requirements from there. Innoslate has an easy way to automatically generate requirements documents from models. Below we’ll take a well-known example from the aerospace industry, the FireSAT model, to show you how to do this.

The diagram below shows the top level of the wildfire detection and alerting system: fires are detected and then alerts are sent. Each of these steps is then decomposed in more detail. The decomposition can be continued until most aspects of the problem and the mechanisms for detection and alerting have been identified. If timing and resources are added, this model can predict where the problems are in the current system. For instance, it can show that most fires are detected too late to be put out before they consume large areas of forest and surrounding populated areas.

One system proposed to solve this is a dedicated satellite constellation (FireSAT) that would detect wildfires early and alert the crews who put them out. The same system could also aid in monitoring ongoing wildfires, in fire suppression, and in property damage analysis. Such a system could even provide this service worldwide. The proposed system for the design reference mission is shown below.

“Perform Normal Ops” is the only step decomposed, as that was the primary area of interest for this mission, which would be a short-term demonstration of the capability. Let’s decompose this step further.

Now we have a decomposition of the fire model, the warning system, and the response. The fire model and response were included to provide information about the effectiveness of such a capability. The remaining step provides the functionality required to perform the primary element of alerting. This element is essentially the communications subsystem of the satellite system (which includes requirements for ground systems as well as space systems).

Innoslate allows you to quickly obtain a requirements document for that subsystem. The document, in the Requirements View, looks like the picture below.

This model is just a quick example, but you can see that it already contains several functional requirements. Once the model is complete, this document can provide the basis for the Communications Subsystem Requirements.
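Conceptually, the generation step maps each leaf-level Action into a functional “shall” statement. Here is a toy sketch of that transformation in Python (the action numbers and names are invented stand-ins for the FireSAT model; this is not Innoslate’s actual generator):

```python
# Hypothetical sketch: derive functional "shall" statements from leaf Actions.
leaf_actions = [
    ("4.3.1", "Receive fire detection data"),
    ("4.3.2", "Format the alert message"),
    ("4.3.3", "Transmit the alert to the ground station"),
]

for number, name in leaf_actions:
    statement = name[0].lower() + name[1:]
    print(f"R.{number} The Communications Subsystem shall {statement}.")
```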

 

If you’d like to see another example of how to do this, watch our Generating Requirements video for the autonomous vehicle example.