How to Choose the Right MBSE Tool

Find the Model-Based Systems Engineering Tool for Your Team

A model-based systems engineering tool can provide you with accuracy and efficiency. You need a tool that helps you do your job faster, better, and cheaper. Whether you are using legacy tools like Microsoft Office or are looking for an MBSE tool that better fits your team, here are some features and capabilities you should consider.

Collaboration and Version Control

It’s 2018. The MBSE tool you are looking at should definitely have built-in collaboration and version control. You want to be able to communicate quickly and effectively with your team members and customers. Simple features such as a chat system and comment fields are a great start. Workflow and version control are more complex features, but very effective. Workflow is a great feature for a program manager: it allows the PM to design a process workflow for the team that sends out reminders and approvals. Version control lets users work together simultaneously on the same document, diagram, etc. If you are working in a team of two or more people, you need a tool with version control. Otherwise, you will waste a lot of time waiting for a team member to finish a document or diagram before you can work on it.

Built-in Modeling Languages Such as LML, SysML, BPML, Etc.

Most systems engineers need to be able to create uniform models. LML encompasses the necessary aspects of both SysML and BPML. If you would like to try a simpler modeling language for complex systems, LML is a great way to do that. A built-in modeling language helps you make your models correct and understandable to all stakeholders.

Executable Models

An MBSE tool needs to be much more than just a drag-and-drop drawing tool; the models need to be executable. Executable models ensure accurate processes through simulation. Innoslate’s activity diagram and action diagram are both executable through the discrete event and Monte Carlo simulators. With the discrete event simulator, you will not only be able to see your process models execute, but you will also be able to see the total time, costs, resources used, and slack. The Monte Carlo simulator will show you the standard deviation of your model’s time, cost, and resources.
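
To make this concrete, here is a minimal sketch of the general Monte Carlo idea, written in Python rather than in Innoslate: sample each step’s duration many times, sum them, and report the mean and standard deviation. The step names and triangular-distribution parameters are made-up placeholders, and the code illustrates the technique only, not Innoslate’s simulator.

  import random
  import statistics

  # Hypothetical three-step process; durations in hours expressed as
  # (min, most likely, max) triples -- placeholder values only.
  steps = {
      "Receive Order": (0.5, 1.0, 2.0),
      "Assemble Product": (2.0, 3.0, 6.0),
      "Ship Product": (1.0, 1.5, 4.0),
  }

  def run_once():
      """One pass through the process: sample each step and sum the durations."""
      return sum(random.triangular(lo, hi, mode) for lo, mode, hi in steps.values())

  # Monte Carlo: repeat the run many times and summarize the spread.
  samples = [run_once() for _ in range(10_000)]
  print(f"mean total time: {statistics.mean(samples):.2f} h")
  print(f"std deviation:   {statistics.stdev(samples):.2f} h")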

Easy to Learn

It can take a lot of time and money to learn a new MBSE tool. You want a relatively short learning curve. First, look for a tool that has an intuitive user interface. A free trial, sandbox, or starter account is a major plus; this lets you get a good feel for how easy the tool is to learn. Look for tools that provide free online training. It’s important that the tool provider is dedicated to educating their users. They should have documentation, webinars, and free or included support.

Communicates Across Stakeholders

Communication in the system/product lifecycle is imperative. Most of us work on very diverse teams. Some of us have backgrounds in electrical engineering or physics or maybe even business. You need to be able to communicate across the entire lifecycle. This means the tool should have classes that meet the needs of many different backgrounds, such as risk, cost, decisions, assets, etc. A tool that systems engineers, program managers, and customers can all understand is ideal. The Lifecycle Modeling Language (LML) is a modeling language designed to meet the needs of all these stakeholders.

Full Lifecycle Capability

A tool with full lifecycle capability will save you money and time. If you don’t choose a tool with all the features needed for the project’s lifecycle, you will have to purchase several different tools. Each of those tools can cost as much as a single full lifecycle MBSE tool. You will also have to spend more money on training, since you will not be able to train everyone in one large group. Most tools do not work together, so you will have to spend resources on integrating the different tools. All of this makes the overall project cost a lot more. This is why Innoslate is a full lifecycle MBSE solution.


It’s important to find the tool that is right for your project and your team. These are just helpful guidelines to help you find the right tool for you. You might need to adjust some of these guidelines for your specific project. If you would like to see if Innoslate is the right tool for your project, get started with it today or call us to see if our solution is a good fit for you.


Why Do We Need Model-Based Systems Engineering?

MBSE is one of the latest buzzwords to hit the development community.

The main idea was to transform the systems engineering approach from “document-centric” to “model-centric.” Hence, the systems engineer would develop models of the system instead of documents.

But why? What does that buy us? Switching to a model-based approach helps: 1) coordinate system design activities; 2) satisfy stakeholder requirements; and 3) provide a significant return on investment.

Coordinating System Design Activities

The job of a systems engineer is in part to lead the system design and development by working with the various design disciplines to optimize the design in terms of cost, schedule, and performance. The problem with letting each discipline design the system without coordination is shown in the comic.

If each discipline optimized for their area of expertise, then the airplane (in this case) would never get off the ground. The systems engineer works with each discipline and balances the needs in each area.

MBSE can help this coordination by providing a way to capture all the information from the different disciplines and share that information with the designers and other stakeholders. Modern MBSE tools, like Innoslate, provide the means for this sharing, as long as the tool is easy for everyone to use. A good MBSE tool will have an open ontology, such as the Lifecycle Modeling Language (LML); many ways to visualize the information in different interactive diagrams (models); the ability to verify that the logic and modeling rules are being met; and traceability between all the information from all sources.

Satisfying Stakeholder Requirements

Another part of the systems engineers’ job is to work with the customers and end-users who are paying for the product. They have “operational requirements” that must be satisfied so that they can meet their business needs. Otherwise they will no longer have a business.

We use MBSE tools to help us analyze those requirements and manage them to ensure they are met at the end of product development. As such, the systems engineer becomes the translator among the electrical engineers, the mechanical engineers, the computer scientists, the operators of the system, the maintainers of the system, and the buyers of the system. Each speaks a different language. The idea of using models was to provide this communication in a simple, graphical form.

We need to recognize that many of the types of systems engineering diagrams (models) do not communicate well to everyone, particularly the stakeholders. That’s why documents contain both words and pictures: they provide not only the visual but also an explanation of the visual for those who do not understand it. We need an ontology and a few diagrams that seem familiar to almost anyone. So, we need something that can model the system and communicate well with everyone.

Perhaps the most important thing about this combined functional and physical model is that it can be tested to ensure that it works. Using discrete event simulation, this model can be executed to create timelines and identify resource usage and cost. In other words, it allows us to optimize the cost, schedule, and performance of the system through the model. Finally, we have something that helps us do our primary job. Now that’s model-based systems engineering!

Provides a Significant Return on Investment

We can understand the idea of how systems engineering provides a return on investment from the graph.

The picture shows what happens when we do not spend enough time and money on systems engineering. The result is often cost overruns, schedule slips, reduced performance, and program cancellations. Something not shown on the graph, since it is NASA-related data for unmanned satellites, is the potential loss of life due to poor systems engineering.

MBSE tools help automate the systems engineering process by providing a mechanism to not only capture the necessary information more completely and traceably, but also verify that the models work. If those tools contain simulators to execute the models and from that execution provide a means to optimize cost, schedule, and performance, then fewer errors will be introduced in the early, requirements development phase. Eliminating those errors will prevent the cost overruns and problems that might not be surfaced by traditional document-centric approaches.

Another cost reduction comes from conducting model-based reviews (MBRs). An MBR uses the information within the tool to show reviewers what they need to ensure that the review evaluation criteria are met. The MBSE tool can provide a roadmap for the review using internal document views and links, and provide commenting capabilities so that reviewers’ questions can be posted. The developers can then use the tool to answer those comments directly. By not having to print copies of the documentation for everyone and then consolidate the markups into a document for adjudication, we cut out several time-consuming steps, which reduces the labor cost of the review by an order of magnitude. This MBR approach can reduce the time to review and respond to the review from weeks to days.


The purpose of “model-based” systems engineering was to move away from being “document-centric.” MBSE is much more than just a buzzword. It’s an important approach that allows us to develop, analyze, and test complex systems. Most importantly, we need MBSE because it provides a means to coordinate system design activities, satisfy stakeholder requirements, and provide a significant return on investment. The “model-based” technique is only as good as the MBSE tool you use, so make sure to choose a good one.

Developing Requirements from Models – Why and How

One of the benefits of having an integrated solution for requirements management and model-based systems engineering is that you can easily develop requirements from models. This is an increasingly common practice in the systems engineering community. Often, as requirements managers, we are given the task of updating a product or system or developing an entirely new one. A great place to start in this situation is to create two models: a current model and a future (proposed) model. This way you can predict where the problems are in the current system and develop requirements from there. Innoslate has an easy way to automatically generate requirements documents from models. Below we’ll take a well-known example from the aerospace industry, the FireSAT model, to show you how you can do this.

The diagram below shows the top level of the wildfire detection and alerting system. Fires are detected and then alerts are sent. Each of these steps is then decomposed in more detail. The decomposition can be continued until most aspects of the problem and the mechanisms for detection and alerting have been identified. If timing and resources are added, this model can predict where the problems are in the current system. This model can show you that most fires are detected too late to be put out before they consume large areas of forest and surrounding populated areas.

One system proposed to solve this is a dedicated satellite constellation (FireSAT) that would detect wildfires early and alert crews to put them out. The same system could also aid in monitoring ongoing wildfires and in fire suppression and property damage analysis. Such a system could even provide this service worldwide. The proposed system for the design reference mission is shown below.

“Perform Normal Ops” is the only step decomposed, as that was the primary area of interest for this mission, which would be a short-term demonstration of the capability. Let’s decompose this step further.

Now we have a decomposition of the fire model, warning system, and response. The fire model and response were included to provide information about the effectiveness of such a capability. The other step provides the functionality required to perform the primary element of alerting. This element is essentially the communications subsystem of the satellite system (which includes requirements for ground systems as well as space systems).

Innoslate allows you to quickly obtain a requirements document for that subsystem. The document, in the Requirements View, looks like the picture below.

This model is just a quick example, but you can see that it already contains several functional requirements. This document, once the model is complete, can then provide the basis for the Communications Subsystem Requirements.
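
For readers curious about the underlying idea, here is a minimal sketch of turning a functional decomposition into “shall” statements. It is an illustration only: the action names, numbering, and wording rule are hypothetical and do not represent Innoslate’s actual generation logic or the full FireSAT model.

  # Hypothetical functional decomposition: number -> (action name, children).
  actions = {
      "1":   ("Detect Wildfires", ["1.1", "1.2"]),
      "1.1": ("Scan Coverage Area", []),
      "1.2": ("Classify Thermal Signatures", []),
      "2":   ("Send Alerts", ["2.1"]),
      "2.1": ("Transmit Alert to Ground Station", []),
  }

  def generate_requirements(number, subject="The system"):
      """Walk the decomposition and emit one 'shall' statement per action."""
      name, children = actions[number]
      yield f"R.{number} {subject} shall {name.lower()}."
      for child in children:
          yield from generate_requirements(child)

  for root in ("1", "2"):
      for statement in generate_requirements(root):
          print(statement)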


If you’d like to see another example of how to do this, watch our Generating Requirements video for the autonomous vehicle example.

Move Past Spreadsheets with Modern Requirements Management

Are you still using Microsoft Office to capture, manage, analyze, and trace all your requirements? Products and systems increase in complexity every day. You need a requirements management tool that can properly handle large complex projects.

When you use spreadsheets for requirements management, you increase your time to market. Even worse, not using a modern requirements management solution can result in a higher risk of product failure. A CIO report found that “as many as 71 percent of software projects that fail do so because of poor requirements management.” Poor requirements management occurs when teams use antiquated RM tools that do not have the needed traceability, collaboration, and quality analysis features.

Traceability needs to happen throughout the entire process. It’s much simpler to get full project traceability if you can map your process in the same place you create your requirements. That’s why more and more companies are looking at robust requirements management solutions like Innoslate, which have built-in collaboration features, traceability, test processes, system processes, and more. Innoslate has the benefit of being a full lifecycle tool. You can start with requirements management, develop a process for the product and system, and then verify and validate that the process meets the requirements.

Modern requirements tools should be able to trace between requirements and other classes and generate reports such as the RTM, RSM, and RVM. In Innoslate, you can also use the Test Center feature, and you can even trace the requirements to your verification actions (Test Cases) and create a complete RVTM.
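
As a rough illustration of what such a trace report boils down to, the sketch below assembles a tiny verification traceability matrix from requirement-to-test links. The requirement and test case names are hypothetical, and the layout is not Innoslate’s RVTM report format.

  # Hypothetical requirements and the test cases that verify them.
  requirements = ["REQ-1 Detect fires", "REQ-2 Send alerts", "REQ-3 Log events"]
  verified_by = {
      "REQ-1 Detect fires": ["TC-01 Detection demo"],
      "REQ-2 Send alerts": ["TC-02 Alert latency test", "TC-03 End-to-end test"],
  }

  # Print a simple matrix; requirements with no linked test are flagged.
  print(f"{'Requirement':<22} Verification")
  for req in requirements:
      tests = verified_by.get(req, [])
      print(f"{req:<22} {', '.join(tests) if tests else 'NOT COVERED'}")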

Another major problem with using spreadsheets is that teams can barely communicate with each other. It becomes difficult to keep files updated. Files are often shared between people using the same tools, but cross-sharing isn’t really possible. Large teams with large, complex requirement sets need to be able to communicate effectively. Cloud RM tools provide the ability to collaborate and keep information accurate. You can also look for on-premise solutions that offer your team collaboration but still meet your security needs. Innoslate offers the ability to work collaboratively throughout the entire project. With Innoslate you can communicate quickly via chat and comments and keep a record of your conversation. Version control allows team members to work together on the same requirements document, saving you time and reducing errors.

Of course, with all these collaboration features you need strong program management controls. A program manager can see every change made to a requirement and the team member who made the change. He or she can then revert to older versions. Baselining allows you to see changes throughout the entire history of the document. With permissions, you can determine which team members have owner, read/write, or read-only privileges for your project. Branching and forking provide even stricter controls, allowing the program manager to split off certain sections of a project to different groups. From there the program manager can decide which changes to accept back into the main project.

Spreadsheets were not specifically designed for capturing, managing, and analyzing requirements. Microsoft Office’s spell check was built to help maintain proper grammar and spelling; Innoslate, however, has a quality analysis feature that looks for mistakes specific to requirements, such as combining multiple requirements into one statement (which can make verification impossible) or writing requirements that aren’t specific enough. These mistakes are costly and can result in poor requirements management. Innoslate can improve your entire requirements document by finding these mistakes for you.
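
To give a sense of the kinds of checks involved, here is a minimal sketch with two simple heuristics: flagging a possible compound requirement and flagging vague, hard-to-verify terms. These heuristics are illustrative only; Innoslate’s quality analysis is considerably more thorough.

  import re

  # Illustrative list of vague words that make a requirement hard to verify.
  VAGUE_TERMS = {"user-friendly", "fast", "adequate", "as appropriate"}

  def check_requirement(text):
      findings = []
      # A 'shall ... and ...' pattern often hides two requirements in one.
      if re.search(r"\bshall\b.*\band\b", text, re.IGNORECASE):
          findings.append("possible compound requirement (consider splitting)")
      for term in VAGUE_TERMS:
          if term in text.lower():
              findings.append(f"vague term: '{term}' (not verifiable as written)")
      return findings

  req = "The system shall be fast and shall support an adequate number of users."
  for finding in check_requirement(req):
      print(finding)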

It’s important to find a modern solution that can allow you to move past spreadsheets with traceability, collaboration, and quality analysis.

Watch “Move Past Spreadsheets with Modern Requirements Management” webinar.

How to Make Requirements Approvals Using Innoslate

One question we often get is: does Innoslate do workflow? What people usually mean by this question is: how can they approve documents and requirements and be notified when that approval has occurred? How can we be sure the right person approved the item? Innoslate provides this capability using what we call “lightweight workflow.”

Lightweight workflow follows a simple process using the notifications, labels, history, and baselining features of Innoslate. The steps of that process are provided below.

Step 1: Setting up a signature block.

Most requirements approvals occur at the document level. An example would be a standard operating procedure (SOP) for a project. The set of requirements is embedded in the SOP document. Innoslate provides SOP templates that can be used to develop them. You only need to access the SOP document (Menu/SOPs, as seen below).


You can then select a blank document or the Standard Operating Procedure (With Guidelines) and the template will be produced.

In this template, there is a signature block. This block can easily be split into separate blocks so that each signatory is broken out individually.

The screenshot below shows an example of breaking out the signature blocks. We have already added the information about the signatories of the SOP (Project Manager, Program Manager, and Division Manager).

Each manager may want to subscribe to one or more of these blocks to be notified when it has been changed. Subscribing to an entity is easy and available from the Entity View. Just select the entity and open the Entity View, as shown below.

In the Entity View of the signature of interest, select the “More” button and “Follow” the entity. The “Follow” tells Innoslate that you want to receive a notification anytime this entity is changed. When a change is made an e-mail will be sent to your e-mail address.

Once the Project Manager approves this entity by adding the “Approved” label (note that you may have to add this label, as it is not part of the standard label set), an e-mail will be sent to anyone subscribed to the entity. The manager’s initials or a picture of his or her signature can be added at the same time. The e-mail below shows the notification I received from Innoslate.

Going back to the SOP and selecting the Project Manager entity, we see that the approval was made (see below).

The time and date of this approval and who approved it is recorded in the history of that entity, which is accessed from the Entity View using the “History” button (see below).

If you want to ensure that no changes occur elsewhere in the document after each approval, you can use the baselining feature (see the “Baseline” button at the top of the Document’s View).

This action will freeze the changes and only allow changes to the next version.

So using these techniques, Innoslate provides all you need to ensure that the requirements documents you produce follow your workflow.

How to Import Complex Documents into Innoslate

One of the first things you want to do when using a requirements or PLM tool is to import complex documents into the tool, so that you can begin analysis. Most documents have pictures, tables and other elements of information that you want to be able to access. Often these complex documents come as an Adobe Portable Document Format (PDF) file; other times they come as Comma Separated Values, MS Word and other formats. In this blog, we will deal with PDFs.

When bringing in a new document, we recommend starting with a new Innoslate project. Also, most documents are the result of non-uniform word processing, which means that import software has to deal with many different possible numbering schemes and formats. This makes getting a document into a tool very difficult. Fortunately, Innoslate’s Import Analyzer provides the means to overcome most of this problem, but you may want to do a little bit of work on the file first.

PDFs come in two forms: 1) scanned documents; and 2) selectable documents. The first requires the use of Optical Character Recognition (OCR) software. We recommend Google’s, as it seems to be one of the best, but many other OCR tools are available. This process converts a scanned document into a selectable one.

Once you have the document in an editable PDF format and you have a copy of the latest version of Adobe Acrobat Pro, you can try to save the document as an MS Word file. Adobe tends to do a very good job converting documents this way. You may still want to go through and clean up the document, such as removing the table of contents and other unnecessary information.

After you have the document in the format you want, select the Import Analyzer from the Menu and follow the process of using the “Word (.docx)” tab, selecting the class for import (Next), and dragging the file into the window for import (Step 1). The upload and analysis process may take a minute or so, depending on the size and complexity of the document (Step 2). The analysis includes the creation of parent-child relationships (decomposed by/decomposes) as identified by the numbering scheme.
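
The parent-child inference from a numbering scheme can be pictured with a small sketch like the one below. It illustrates the idea only; the numbered lines are made up, and this is not the Import Analyzer’s actual algorithm.

  # Made-up numbered lines from a document, one requirement per line.
  lines = [
      "3 Communications",
      "3.1 The system shall transmit alerts within 5 minutes.",
      "3.1.1 Alerts shall use the ground-station uplink format.",
      "3.2 The system shall acknowledge received commands.",
  ]

  # Infer parent-child (decomposes / decomposed by) pairs from the numbering.
  relationships = []
  for line in lines:
      number = line.split()[0]
      if "." in number:
          parent = number.rsplit(".", 1)[0]
          relationships.append((number, "decomposes", parent))

  for child, rel, parent in relationships:
      print(f"{child} {rel} {parent}")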

Once the upload and analysis are complete, just select “Next” and you can see a preview of the information as it has been captured (Step 3). If you are satisfied, save the entities into the database.

Step 1:

Step 2:

Step 3:

The end result of this process is the document as it appears in the Requirements View. The analyzer includes any pictures and tables, if they were properly developed that way in the original document (see below).



If you already have another project to which you want to add this one, you can export and re-import the Innoslate XML file, or (better) use the branching/forking capability (go to Database View and use the “Branch” button). When creating a new branch, instead of selecting a “New Project,” use the “Target” drop-down menu to select the project with which you want the document to merge (see right).


The process above is a best-case situation for complex documents. Sometimes, this approach to importing fails due to problems in the MS Word document itself.

The second way to import a PDF file is to use the “Plain Text (.pdf, .txt, etc.)” tab in the analyzer (shown below).

Here you need to give the Artifact (the entity that will store the uploaded document) a name, which you can edit later. Again, select the class type for import (we default to Requirements, since that is usually what you are importing). Finally, select the type of list contained within the document, again for the purpose of creating the parent-child relationships.



After clicking the “Next>” button, you can paste the copied text from the file into the space provided. After clicking the Next button on that screen, the analysis proceeds and then you can preview the results as before.


Finally, the worst-case scenario is a PDF document that cannot be easily imported using any of the Import Analyzer tools. Although this is rare, it does occur. Recently I was asked to import a portion of the US Code. For anyone who has seen it, it’s double column and contains a lot of unusual characters. So, I determined that the fastest way to bring it into the tool was by cutting and pasting objects into the Requirements View of an Innoslate project. I used this as an opportunity to conduct analysis on the document as I went. Since I was not under a tight deadline (I had days to perform the task, not hours) and we ultimately wanted to perform requirements analysis anyway, this let me take blocks of text that only provided context and treat them as Statements instead of Requirements, and break up paragraphs that contained multiple requirements into individual entities so they could be separately traced. That effort took about a day and a half, while the approaches above had taken only minutes, but with no analysis performed.


All in all, Innoslate provides the means to bring any and all information from the outside into the tool. You can then use that information to complete the rest of the lifecycle within the same tool environment (with no plug-ins required).

Document Trees in Innoslate

Different levels of documents result from decomposition of user needs to component-level specifications, as shown in the figure below.

Innoslate enables the user to create such trees as a Hierarchy chart, which uses the “decomposed by” relationship to show the hierarchy. An example is shown below.

Each of these Artifacts contains requirements at different levels. Those requirements may be related to one another using the “relates from/relates to” relationship if they are peer-to-peer (i.e., at the same level of decomposition) or using the “decomposed by” relationship to indicate that they were derived from a higher-level requirement.

This approach allows you to reuse, rather than recreate requirements from a higher-level document. An example is shown in Requirements View below.


In this example, the top-level Enterprise Requirements were repurposed for the Mission Needs document (MN.1.1 and MN.1.2) and the System Requirements Document (SRD.5). If you prefer to keep the original numbers, you only have to Auto Number the ERD document using that button on the menu bar, and the objects will show up with the ERD prefix in the lower documents. Note that in either case, the uploaded original document retains the original numbers, in case you want to reference them that way. Also, each entity has a Universally Unique Identifier (UUID) that the requirement retains, if you prefer to use that as a reference.

The approach discussed above is only one way to develop a document tree. Innoslate enables other approaches, such as using a different relationship (e.g., derived from/derives). Try it the way above and see if it meets your needs. If not, adjust as you like.

What is a PLM Tool?

Product Lifecycle Management (PLM) software integrates cost effective solutions to manage the useful life of a product.

PLM software tools allow you to:

  • Keep aligned with customer requirements
  • Optimize cost and resources through simulated risk analysis
  • Reduce complexity with a single interconnecting database
  • Improve and maintain quality of a product throughout the lifecycle

PLM includes five primary areas:

  1. Systems engineering
  2. Product and portfolio management (PPM)
  3. Product design (CAx)
  4. Manufacturing process management (MPM)
  5. Product data management (PDM)

Let’s look at each area in more detail.

Systems Engineering

A PLM tool should support the systems engineer throughout the lifecycle by integrating requirements analysis and management (Requirements View and checker) with functional analysis and allocation (all the SysML diagrams, along with LML, IDEF0, N2, and others), solution synthesis (SysML, LML, Layer Diagram, Physical I/O, etc.), test and evaluation (Test Plans, Test Center), and simulation (discrete event and Monte Carlo). Many PLM tools lack the combination of all these capabilities. Innoslate® was made by systems engineers for systems engineers and is designed for the modern cloud environment, which enables massive scalability and collaboration. No other PLM tool has Innoslate’s combination of capabilities in this area.

Product and Portfolio Management

PPM includes pipeline, resource, financial, and risk management, as well as change control. Innoslate® provides all these capabilities and a simple, easy-to-use modeling diagram to capture the business processes, load them with resources and costs, and then produce Gantt charts for the timeline. The Monte Carlo simulation also enables the exploration of schedule and cost risks due to the variation in timing, costs, and resources. This approach is called Model-Based Program Management (MBPM); consider it an important adjunct to the systems engineering work.

Innoslate® also captures risks, decisions, and other program management data completely within the tool using Risk Matrices, and other diagrams. Change control is provided through the baselining of documents, the branching/forking capability, and the object-level history files.

Innoslate® provides a means to develop a program plan that can be linked to the diagrams and other information within the database. This feature enables you to keep all your information and documents together in one place.

Product Design

The Product Design area focuses on the capability to capture and visualize product design information from analysis to concept to synthesis. The Asset Diagram enables the addition of pictures to the standard boxes-and-lines diagrams. This capability enables the development of the high-level concept pictures that everyone needs. Innoslate’s CAD viewer feature not only lets you view STL and OBJ files, it can also create the equivalent Asset Diagram entities from the OBJ file, which makes the integration between the two tools more seamless. Other physical views, such as the Layer Diagram and Physical I/O Diagram, help you view the physical model in ways that usually required a separate drawing tool.

Manufacturing Process Management

Innoslate provides great process planning and resource planning capabilities using the Action Diagram and other features discussed above. Direct interface from Innoslate® to other tools can be accomplished using the software development kit (SDK) application programmer interfaces (APIs). If the MPM tools have Internet access, you can use the Zapier Integration capability, which provides an interface to over 750 tools, ranging from GitHub to PayPal to SAP Jam Collaboration. In addition, Innoslate is routinely used for Failure Modes and Effects Analyses (FMEA), which is critical to MPM.

Product Data Management

Capturing all the product data, such as the part number, part description, supplier/vendor, vendor part number and description, unit of measure, cost/price, schematic or CAD drawing, and material data sheets, can easily be accomplished using Innoslate. Most of the entities and attributes have already been defined in the baseline schema, but you can easily add more using the Schema Editor. You can develop a complete library of product drawings and data sheets by uploading electronic files as part of the “Artifacts” captured in the tool. Constructing a Work Breakdown Structure (WBS) or a Bill of Materials (BOM) is also simply a standard report from the tool.

As a systems engineer, it’s important to let information drive your decisions. That can only be achieved through detailed functional analysis and an underlying scalable database, and it is best accomplished with a PLM software tool that encompasses all five areas described above. Innoslate meets or exceeds the needs in every area, so you are better equipped to face high-risk decisions.

Why Requirements Management Needs Analysis

When we talk to potential customers who are seeking requirements management tools, we begin by asking them: “What is your requirements management process?” Usually they say, “We get requirements from our customers and then trace them to our products.” In areas where the customer knows exactly what they need and has been doing it for many, many years, this might work. But in the fast-paced businesses of today, that approach will miss a lot of the real requirements, gaps, and risk areas. Ignorance of these requirements usually means product failure or, worse, loss of life. Your company is then liable for that failure, and in the US that usually means a lawsuit. If you can’t prove that you did the necessary analyses, your company will usually lose the lawsuit.

If they say they do analysis on those original customer requirements, we then ask: “How do you do requirements analysis?” Often the answer is: “Well, we read them and look for shall statements.” Again, this is insufficient and will lead to failure! A requirement must be:

  • Clear: the requirement is unambiguous and not confusing.
  • Complete: the requirement expresses a whole idea.
  • Consistent: the requirement does not conflict with other requirements.
  • Correct: the requirement describes the user’s true intent and is legally possible.
  • Design: the requirement does not impose a specific solution on the design; it says “what,” not “how.”
  • Feasible: the requirement can be implemented with existing technology, and within cost and schedule.
  • Modular: the requirement can be changed without excessive impact on other requirements.
  • Traceable: the requirement is uniquely identified and can be tracked to predecessor and successor lifecycle items/objects.
  • Verifiable: it can be proven (within realistic cost and schedule) that the system meets the requirement.

If they say they understand the need for complete requirements analysis, then we ask: “Did you know that functional analysis and modeling is a critical part of requirements analysis and management?” Again, most of the time the answer we receive is: “No, we don’t need that! All we need is a cheap tool that traces the requirements to the product.” As you might expect, this approach also leads to failure.

The Department of Defense (DoD) recognizes this problem. They define the purpose of requirements analysis as:

“To (1) obtain sets of logical solutions [functional architectures] derived from defined Technical Requirements, and (2) to better understand the relationships among them. Once these logical solution sets are formed, allocated performance parameters and constraints are set, and the set of Derived Technical Requirements for the system can be initially defined.

Outcomes: For each end product of a system model:

  1. A set of logical models showing relationships (e.g., behavioral, functional, temporal, data flows, etc.) among them
  2. A set of ‘design-to’ Derived Technical Requirements”

Most experts use a process like the one below:

Notice that functional analysis and allocation is in the center of this process. Other process models have this as well, but tend to bury it. That is a mistake. What functional analysis does is enable the analyst to create user stories (sometimes called use cases, scenarios, or threads) that can be used to validate the user requirements. As you conduct this analysis by creating a functional model, you can identify gaps and problem areas that the user may not have originally considered. You can also use these models to derive the performance requirements and identify other constraints using simulation.

The figure below shows a functional model of a process for fighting forest fires. It includes data entities and resources used in the operation. The rounded rectangular blocks represent the functional requirements for the operation. Even though some of these functions may currently be performed by people, many of them can be automated to enhance the performance of the overall system. Many of these functional requirements would be missing if you just asked people what the requirements were.

We can better understand the performance requirements by executing this model in a simulation. An example of the results from a discrete event simulation is shown below:


These results show the time, cost, and resources (water) used to put out the fire. By running these kinds of simulations, we can easily determine the cost, schedule, and performance metrics needed to accomplish the operation. Isn’t it better to know that early in the design phase rather than later?
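
As a rough illustration of what executing such a model means, the sketch below steps through a simple sequential process and accumulates time, cost, and water usage. The actions, durations, and quantities are made-up placeholders, not results from the actual forest-fire model or Innoslate’s discrete event simulator.

  # Hypothetical sequential process: (action, duration_hours, cost_usd, water_liters).
  process = [
      ("Detect Fire", 0.5, 1_000, 0),
      ("Dispatch Crews", 1.0, 5_000, 0),
      ("Suppress Fire", 6.0, 40_000, 30_000),
      ("Confirm Containment", 1.0, 2_000, 0),
  ]

  clock, total_cost, water_used = 0.0, 0, 0
  for action, duration, cost, water in process:
      clock += duration  # sequential execution; no branching or parallelism
      total_cost += cost
      water_used += water
      print(f"t={clock:5.1f} h  finished {action}")

  print(f"total time: {clock} h, cost: ${total_cost:,}, water: {water_used:,} L")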

The end result of performing these analyses is a much better set of requirements to build or buy what the users need. It will also show that you did your due diligence in this litigious society. That could mean the difference between a successful project and a failed project.