Quick Guide to Innoslate’s Ontology

Innoslate uses the Lifecycle Modeling Language (LML) ontology as the basis for the tool’s database schema. For those new to the word “ontology,” it’s simply the set of entity classes and the relationships between them that form the basis for capturing the needed information. We look at this in a simple Entity-Relationship-Attribute (ERA) form. This formulation has a simple parallel to the way we look at most languages: entities represent nouns; relationships represent verbs; attributes on entities represent adjectives; and attributes on relationships represent adverbs.
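
To make the ERA parallel concrete, here is a minimal sketch of how an entity-relationship-attribute store could be represented in code. It is our own illustration with assumed names, not Innoslate’s actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """A noun, e.g. an LML Action or Asset."""
    name: str
    entity_class: str                               # e.g. "Action"
    attributes: dict = field(default_factory=dict)  # adjectives, e.g. {"duration": "10 min"}

@dataclass
class Relationship:
    """A verb connecting two entities, e.g. "decomposed by"."""
    source: Entity
    target: Entity
    verb: str
    attributes: dict = field(default_factory=dict)  # adverbs, e.g. {"order": 1}

# "Control Flow" (noun) is decomposed by (verb) "Open Valve"
parent = Entity("Control Flow", "Action", {"duration": "10 min"})
child = Entity("Open Valve", "Action")
link = Relationship(parent, child, "decomposed by")
```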

LML contains twelve (12) entity classes and eight (8) subclasses. They represent the basic elements of information needed to describe almost any system. The figure below shows how they can be grouped to create the models needed for this description.

Most of these entity classes have various ways to visualize the information, commonly called models or diagrams. The benefit of producing visualizations from this ontology is that when you create one model, other models that use the same information automatically have that information available.

All these entities are linked to one another through the relationships. The primary relationships are shown below.


This language takes a little getting used to, like any other language. For example, you might be used to referring to something functional as a Function or Activity. These are both “types” of Actions in LML, implemented as labels in Innoslate. Similarly, you may be used to using different relationship names for the parents and children of different entity classes. By using the same verbs for all parent-child relationships, you avoid the confusion of having to remember all the different verbs.

You still might need other ontological additions. LML was meant to be the “80% solution.” You should look very closely at the ontology first, as often you only need to add types (labels) or an attribute here and there. Hopefully, you will rarely need to add new classes and relationships. If you do add new classes, try to do so as subclasses of existing ones, so that you inherit the diagrams as well. For example, when the Innoslate development team added the new Test Center, they decided they needed to extend the Action class. This enables the TestCase class to inherit the Action Diagram and other functional diagrams, as well as the status, duration, and other attributes that were important.
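
As a rough illustration of that subclassing advice, plain object-oriented inheritance captures the idea: TestCase extends Action, so it picks up Action’s attributes (and, in the tool, its diagrams) for free. The field names here are illustrative assumptions, not the actual Innoslate implementation.

```python
class Action:
    """Functional entity with the attributes common to all Actions."""
    def __init__(self, name, duration=None):
        self.name = name
        self.status = "Proposed"
        self.duration = duration

class TestCase(Action):
    """Subclass of Action: inherits status, duration, etc.,
    and adds test-specific information."""
    def __init__(self, name, duration=None, expected_result=None):
        super().__init__(name, duration)
        self.expected_result = expected_result

tc = TestCase("Verify pump start", duration="5 min",
              expected_result="Pump reaches full flow")
print(tc.status)  # "Proposed" -- inherited from Action
```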

Hopefully, you can see the benefits of using LML as the basis for Innoslate’s schema. It was designed to be:

  • Broad (covers the entire lifecycle – technical and programmatic)
  • Ontology-based (enables translation from LML to other languages and back)
  • Complete (all the capabilities of SysML, with LML v1.1 extensions, and of DoDAF)
  • Simple in structure
  • Useful for stakeholders across the entire lifecycle

For more information, see www.lifecyclemodeling.org and visit the Help Center at help.innoslate.com.

Great Read “Enhancing MBSE with LML”

A review of “Enhancing Model-Based Systems Engineering with the Lifecycle Modeling Language” by Dr. Warren Vaneman

Dr. Vaneman’s paper, “Enhancing Model-Based Systems Engineering with the Lifecycle Modeling Language,” provides a compelling justification for the need for a simpler, yet more complete, language that integrates systems engineering with program management to support the entire system lifecycle. It shows that the current LML standard (version 1.1) includes all the key features of the Systems Modeling Language (SysML), and thus can be used by systems engineering practitioners to generate the complete SysML diagram set.

This paper expresses the key goals of LML: “1) to be easy to understand; 2) to be easy to extend; 3) to support both the functional and object-oriented (O-O) approaches within the same design; 4) to be a language that can be understood by most system stakeholders, not just systems engineers; 5) to support the entire system’s lifecycle – cradle to grave; and 6) to support both evolutionary and revolutionary system changes to system plans and designs over the lifespan of the system.”

Dr. Vaneman covers three themes in the rest of the paper: 1) an overview of legacy modeling and an introduction to LML; 2) a comparison of SysML and LML using eight MBSE effectiveness measures; and 3) the potential to use LML as an ontology for SysML. Of particular interest was the comparison of SysML to LML. The major problem with SysML is the lack of an ontology, which makes it less expressive and precise. SysML seems to have problems in usability as well, due to the complexity of the diagramming notations.

Although a preliminary mapping of LML to SysML was done as part of the first release of the standard, version 1.1 only had to be slightly modified to more fully visualize all the SysML diagrams. Only two new entity classes were defined: Equation and Port. Equation was added to support the Parametric Diagram, which diagrams equations; Port, a subclass of Asset, was essential for a couple of the physical modeling diagrams.

I heartily agree with Dr. Vaneman’s conclusion that “LML provides a means to improve how we model system functionality to ensure functions are embedded in the design at the proper points and captured as part of the functional and physical requirements needed for design and test.”

You can read Warren Vaneman’s paper here:

IEEE MBSE Paper- Vaneman

More resources:

  • LML website
  • Quick Guide to Innoslate’s Ontology


10 Qualities That Make a Good Systems Engineer

All systems engineers should have an understanding of basic concepts and a strong technical background, but these qualities go beyond just the necessities. From 40+ years of experience, I have found that a good systems engineer must have the following 10 qualities.

#1 Patience and Perseverance

To create a complicated system, an engineer must have a lot of patience and perseverance. The more complex the system, the longer and more tedious the project becomes. An engineer cannot figure out everything at once. It takes time to see the big picture and to look for all the small details. You will test and test and still find errors. You have to have the patience to know that it takes time, and the determination to keep going after hundreds of failed attempts.

#2 Ability to Know When You Are Done

A good systems engineer wants their project to be flawless, but often it’s too easy to fall into a perfectionist trap. You tell yourself, “One more change and it will be perfect.” However, doing this may mean you never complete your project and all that hard work will become obsolete. The best engineers know when their system is good enough and when the system needs a little more re-engineering.

#3 An Analytical Brain

Most engineers are naturally analytical, which is probably why they were attracted to the field in the first place. From the moment they could talk, they were the ones that continually asked questions and analyzed the world around them. A good systems engineer can go one step further than just analyzing and look for solutions to the problems and questions they analyze.

#4 Knowledge of Systems Engineering Software Tool(s)

In this day and age, all systems engineers should have some experience with tools. Most colleges, especially at the graduate level, use systems engineering software tools. These tools allow you to create complex systems. They help you organize your information and develop documentation and reports much faster and with higher accuracy. They can also help you analyze your information better. Even though you should already be a pro at analyzing, a tool can organize the information in a way that makes analysis faster and easier. Tools such as Innoslate® can make you a better systems engineer.

#5 Strong Organizational Skills

You need organizational skills in order to handle the amount of information that a systems engineer deals with on a regular basis. It is important to organize well, so you are able to track status and history accurately and create documents and reports that are understandable. Although a tool can greatly improve the way you organize, you still need to understand organizational concepts.

#6 Ability to See the Small Picture

One of the greatest qualities a systems engineer can have is being detail oriented. You should be able to look at the small picture and see that all the details are thoroughly reviewed and that no errors slip through. You need to be a detail-oriented type of person. Much of what we do is planning. Just as an event planner must make sure all the details are just right for the ultimate goal (the event) to be a success, so must a systems engineer.

#7 Ability to See the Big Picture

The overall system needs to be looked at just as much as the small details that make up the system. You need to make sure that the goal of the entire system is kept in mind throughout the planning. A good systems engineer needs to be able to determine future needs as well. They must have vision (I talk about this in my upcoming book on LML) and be detail oriented, but still be able to see the big picture.

#8 Well-Rounded Background

A bad systems engineer knows systems engineering concepts and definitions like the back of his hand, but knows nothing else. A good systems engineer tries to be knowledgeable in other subjects related to the field. A great systems engineer understands the importance of being well-rounded. A well-rounded background will help a systems engineer analyze and find potential issues better than anyone else.

#9 Communication Skills

Unfortunately, English is not a high priority for many engineering colleges. Systems engineers need to communicate well. They need to be able to communicate to non-engineers. Communication skills take time and practice to perfect. If you are a systems engineer and you know that communication is not a strong skill of yours, make the effort to improve.

#10 Ability to Lead, Follow and Work Well in a Team

At some point in your career you will have led, followed, and worked in a team. The best systems engineers know how to do all three well. A good leader knows how to follow and work together with others. A leader understands what his or her team needs to know and understand. The inability to do all three can be detrimental to a project. Systems engineers, more often than not, do extremely important work and need a good leader and a good team to follow.


It takes a lot of time to develop all these qualities. I know I did not have all of them when I began my career. Don’t let this discourage you, but make it a goal to obtain each one of these qualities.

If you think you have these qualities, join our team.

Reposted from SPEC Innovations with permission.


Innoslate’s Ontology Webinar

Live Webinar July 6th at 2:30 pm EST

Everyone talks about “data-centricity,” but what does that mean in practical terms? It means you have to have a well-defined ontology that can capture the information needed to describe the architecture or system you work with or want to create. An ontology is simply the taxonomy of entity classes (bins of information) and how those classes are related to each other.

You’ll learn about a relatively new ontology, the Lifecycle Modeling Language (LML), which provides the basis for Innoslate’s database schema. In this webinar, we will discuss each entity class and why it was developed. Dr. Steven Dam, Secretary of the LML Steering Committee, will present the details of the language and how it relates to other ontologies/languages, such as the DoDAF MetaModel 2.0 and SysML. He will also discuss ways to visualize this information to enhance understanding and how to use it to make decisions about the architecture or system.

Join us live on July 6th at 2:30 pm EST.

After July 6th 2017, watch the recording here.

Branching and Merging In Innoslate


Branching and Merging allows a team member to make large changes in the project without worrying about affecting the overall project.

If a team member wants to make a large change to the project, for example creating a model or tracing large numbers of artifacts together, then they should branch out.

Branching and Merging Decision Tree

A branched project is a mapped copy of the original project. It enables merging, which takes the changes the team member made and integrates them back into the main project, called the trunk.

Branching Procedure

If the team member is satisfied with their changes, and ideally has had them reviewed, they can merge back into the original project.

Merging Procedures

If the team member creates a branched project and doesn’t like their changes, or the changes fail review, they can simply delete the branch and start anew.

Delete Branch Project Procedure


How to Import Complex Documents into Innoslate

One of the first things you want to do when using a requirements or PLM tool is to import complex documents into the tool, so that you can begin analysis. Most documents have pictures, tables and other elements of information that you want to be able to access. Often these complex documents come as an Adobe Portable Document Format (PDF) file; other times they come as Comma Separated Values, MS Word and other formats. In this blog, we will deal with PDFs.

When bringing in a new document, we recommend starting with a new Innoslate project. Also, most documents are a result of non-uniform word processing, which means that import software has to deal with many different possible numbering schemes and formats. This fact makes getting a document into a tool very difficult. Fortunately, Innoslate’s Import Analyzer provides the means to overcome most of this problem, but you may want to do a little bit of work on the file first.

PDFs come in two forms: 1) scanned documents; and 2) selectable documents. The first requires Optical Character Recognition (OCR) software; we recommend Google’s, as it seems to be one of the best, though many other OCR tools are available. This process converts the document into a selectable one.
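
A quick programmatic way to tell the two forms apart is to try extracting text: a scanned PDF yields little or none. Here is a minimal sketch using the open-source pypdf library (our illustration; not part of Innoslate):

```python
from pypdf import PdfReader  # pip install pypdf

def is_selectable(path: str, min_chars: int = 100) -> bool:
    """Heuristic: a scanned (image-only) PDF has little or no extractable text."""
    reader = PdfReader(path)
    text = "".join(page.extract_text() or "" for page in reader.pages)
    return len(text.strip()) >= min_chars

if is_selectable("specification.pdf"):
    print("Selectable: convert to Word and import.")
else:
    print("Scanned: run OCR first.")
```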

Once you have the document in an editable PDF format and you have a copy of the latest version of Adobe Acrobat Pro, you can try to save the document as an MS Word file. Adobe tends to do a very good job converting documents this way. You may still want to go through and clean up the document, such as removing the table of contents and other unnecessary information.

After you have the document in the format you want, select the Import Analyzer from the Menu and follow the process of using the “Word (.docx)” tab, selecting the class for import (Next), and dragging the file into the window for import (Step 1). The upload and analysis process may take a minute or so, depending on the size and complexity of the document (Step 2). The analysis includes the creation of parent-child relationships (decomposed by/decomposes) as identified by the numbering scheme.

Once the upload and analysis are complete, just select “Next” to see a preview of the information as it has been captured (Step 3). If satisfied, select save to store the entities in the database.

Step 1:

Step 2:

Step 3:

The end result of this process is the document appearing in Requirements View. The analyzer includes any pictures and tables, if they were properly developed that way in the original document (see below).


If you already have another project to which you want to add this one, you can export and re-import the Innoslate XML file, or (better) use the branching/forking capability (go to Database View and use the “Branch” button). When creating a new branch, instead of selecting “New Project,” use the “Target” drop-down menu to select the project with which you want the document to merge (see right).


The process above is a best-case situation for complex documents. Sometimes, this approach to importing fails due to problems in the MS Word document itself.

The second way to import a PDF file is to use the “Plain Text (.pdf, .txt, etc.)” tab in the analyzer (shown below).

Here you need to give the Artifact (the entity that will store the uploaded document) a name, which you can edit later. Again, select the class type for import (we default to Requirements, since that is usually what you are importing). Finally, select the type of list contained within the document, again for the purpose of creating the parent-child relationships.


After clicking the “Next>” button, you can paste the copied text from the file into the space provided. After clicking the Next button on that screen, the analysis proceeds and then you can preview the results as before.
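
To illustrate what the analyzer does with a numbering scheme, here is a rough sketch of deriving parent-child pairs from decimal-numbered requirement lines. This is our own simplified illustration, not Innoslate’s actual algorithm:

```python
import re

NUMBERED = re.compile(r"^(\d+(?:\.\d+)*)\s+(.*)$")

def parse_hierarchy(lines):
    """Infer parent-child pairs: '3.1.1' is a child of '3.1', itself a child of '3'."""
    pairs = []
    for line in lines:
        match = NUMBERED.match(line.strip())
        if not match:
            continue  # skip unnumbered text
        number, text = match.groups()
        parent = number.rsplit(".", 1)[0] if "." in number else None
        pairs.append((number, parent, text))
    return pairs

doc = [
    "3 The system shall control the valve.",
    "3.1 The system shall open the valve in under 2 seconds.",
    "3.1.1 Valve opening shall be logged.",
]
for number, parent, text in parse_hierarchy(doc):
    print(f"{number} (parent: {parent}): {text}")
```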


Finally, the worst-case scenario is a PDF document that cannot be easily imported using any of the Import Analyzer tools. Although this is rare, it does occur. Recently I was asked to import a portion of the US Code. For anyone who has seen it, it’s double column and contains a lot of unusual characters. I determined that the fastest way to bring it into the tool was by cutting and pasting objects into the Requirements View of an Innoslate project, and I used this as an opportunity to conduct analysis on the document as I went. Since I was not under a tight deadline (I had days to perform the task, not hours) and we ultimately wanted to perform requirements analysis anyway, this let me treat blocks of text that only provided context as Statements instead of Requirements, and break up paragraphs that contained multiple requirements into individual entities so they could be separately traced. That effort took about a person-day and a half, whereas the approaches above take only minutes, though they include no analysis.


All in all, Innoslate provides the means to bring any and all information from the outside into the tool. You can then use that information to complete the rest of the lifecycle within the same tool environment (with no plug-ins required).

Document Trees in Innoslate

Different levels of documents result from decomposition of user needs to component-level specifications, as shown in the figure below.

Innoslate enables the user to create such trees as a Hierarchy chart, which uses the “decomposed by” relationship to show the hierarchy. An example is shown below.

Each of these Artifacts contains requirements at a different level. Those requirements may be related to one another using the “relates from/relates to” relationship if they are peers (i.e., at the same level of decomposition), or using the “decomposed by” relationship to indicate that they were derived from a higher-level requirement.
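
As a small illustration of those two kinds of trace links, using IDs like those in the example further below (the link structure here is ours, purely for illustration):

```python
# Peer-to-peer links connect requirements at the same level;
# "decomposed by" links derive lower-level requirements from higher ones.
# IDs and links are illustrative only.
links = [
    ("MN.1.1", "relates to",    "SRD.5"),   # peer-to-peer
    ("ERD.1",  "decomposed by", "MN.1.1"),  # derived from higher level
    ("ERD.1",  "decomposed by", "MN.1.2"),
]

def derived_from(parent_id, links):
    """All requirements derived from a given higher-level requirement."""
    return [tgt for src, verb, tgt in links
            if src == parent_id and verb == "decomposed by"]

print(derived_from("ERD.1", links))  # ['MN.1.1', 'MN.1.2']
```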

This approach allows you to reuse, rather than recreate requirements from a higher-level document. An example is shown in Requirements View below.


In this example, the top-level Enterprise Requirements were repurposed for the Mission Needs document (MN.1.1 and MN.1.2) and the System Requirements Document (SRD.5). If you prefer to keep the original numbers, you only have to Auto Number the ERD document using that button on the menu bar, and the objects will show up with the ERD prefix in the lower documents. Note that in either case the uploaded original document retains the original numbers, in case you want to reference them that way. Also, each entity has a Universally Unique Identifier (UUID) that the requirement retains, if you prefer to use that as a reference.

The approach discussed above is only one way to develop a document tree. Innoslate enables other approaches, such as using a new relationship (e.g., derived from/derives). Try it the way above and see if it meets your needs. If not, adjust as you like.

What is a PLM Tool?

Product Lifecycle Management (PLM) software integrates cost-effective solutions to manage the useful life of a product.

PLM software tools allow you to:

  • Keep aligned with customer requirements
  • Optimize cost and resources through simulated risk analysis
  • Reduce complexity with a single interconnecting database
  • Improve and maintain quality of a product throughout the lifecycle

PLM includes five primary areas:

  1. Systems engineering
  2. Product and portfolio management (PPM)
  3. Product design (CAx)
  4. Manufacturing process management (MPM)
  5. Product data management (PDM)

Let’s look at each area in more detail.

Systems Engineering

A PLM tool should support the systems engineer throughout the lifecycle by integrating requirements analysis and management (Requirements View and checker) with functional analysis and allocation (all the SysML diagrams, along with LML, IDEF0, N2, and others), solution synthesis (SysML, LML, Layer Diagram, Physical I/O, etc.), test and evaluation (Test Plans, Test Center), and simulation (discrete event and Monte Carlo). Many PLM tools lack this combination of capabilities. Innoslate® was made by systems engineers for systems engineers and is designed for the modern cloud environment, which enables massive scalability and collaboration; no other PLM tool has its combination of capabilities in this area.

Product and Portfolio Management

PPM includes pipeline, resource, financial, and risk management, as well as change control. Innoslate® provides all these capabilities, along with a simple, easy-to-use modeling diagram to capture business processes, load them with resources and costs, and then produce Gantt charts for the timeline. The Monte Carlo simulation also enables exploration of the schedule and cost risks due to variation in timing, costs, and resources. This approach is called Model-Based Program Management (MBPM); consider it an important adjunct to the systems engineering work.

Innoslate® also captures risks, decisions, and other program management data completely within the tool using Risk Matrices and other diagrams. Change control is provided through the baselining of documents, the branching/forking capability, and the object-level history files.

Innoslate® provides a means to develop a program plan that can be linked to the diagrams and other information within the database. This feature enables you to keep all your information and documents together in one place.

Product Design

The Product Design area focuses on the capability to capture and visualize product design information from analysis to concept to synthesis. The Asset Diagram enables the addition of pictures to the standard boxes-and-lines diagrams, which supports the high-level concept pictures that everyone needs. Innoslate’s CAD viewer not only lets you view STL and OBJ files, it can also create the equivalent Asset Diagram entities from an OBJ file, making the integration between the two tools more seamless. Other physical views, such as the Layer Diagram and Physical I/O, help view the physical model in ways that usually required a separate drawing tool.

Manufacturing Process Management

Innoslate provides great process planning and resource planning capabilities using the Action Diagram and other features discussed above. Direct interface from Innoslate® to other tools can be accomplished using the software development kit (SDK) application programmer interfaces (APIs). If the MPM tools have Internet access, you can use the Zapier Integration capability, which provides an interface to over 750 tools, ranging from GitHub to PayPal to SAP Jam Collaboration. In addition, Innoslate is routinely used for Failure Modes and Effects Analyses (FMEA), which is critical to MPM.

Product Data Management

Capturing all the product data, such as the part number, part description, supplier/vendor, vendor part number and description, unit of measure, cost/price, schematic or CAD drawing, and material data sheets, can easily be accomplished using Innoslate. Most of the entities and attributes have already been defined in the baseline schema, but you can easily add more using the Schema Editor. You can develop a complete library of product drawings and data sheets by uploading electronic files as part of the “Artifacts” capture in the tool. Construction of a Work Breakdown Structure (WBS) and Bill of Materials (BOM) is also simply a standard report from the tool.

As a systems engineer, it’s important to let information drive your decisions. That requires detailed functional analysis and an underlying scalable database, and it is best accomplished with a PLM software tool that encompasses all five areas described above. Innoslate excels in every area, so you are better equipped to face high-risk decisions.

Do Frameworks Really Work?

Over the past 20 years, we have been using various “Frameworks” to capture information about architectures and systems, beginning perhaps with the Zachman Framework, which is still one of the most successful. The Zachman Framework is timeless for this reason: it forced us to think about the standard interrogatives (Who, What, When, Why, Where, and How) and different perspectives (Executive, Business Management, Architect, Engineer, and Technician) as applied to the enterprise. The idea was to build models in each area, starting with simple lists and working into detailed models of the systems at a component level.

Other frameworks followed, including the DoD Architecture Framework (DoDAF – which came from the C4ISR Architecture Framework), the Ministry of Defence Architecture Framework (MODAF), the NATO Architecture Framework, and many others. All these Frameworks were built on the idea that the information could be easily “binned” into these boxes, but as we know, many of the models in these Frameworks included maps between different data elements, such as the CV-6 from DoDAF that maps Capabilities to Operational Activities. Recently, the Object Management Group (OMG) and others have been trying to push out a Unified Architecture Framework (UAF) that combines the models from many of these other frameworks into a single, very large framework. OMG is also responsible for developing the Unified Modeling Language (UML), Systems Modeling Language (SysML), and the Unified Profile for DoDAF and MODAF (UPDM).

All of these frameworks and languages are ways to capture information about a system or enterprise and show it as a set of data, often as a picture or model containing two or more pieces of information. An example is the DoDAF OV-5a, a functional hierarchy of Operational Activities, which contains only a single entity class and the decomposition relationship between those entities. Another example is the OV-5b, which shows the Operational Activities and the “Resource Flows” (which many of us recognize as inputs and outputs to/from the Operational Activities) between them. Thus we now have two entity classes and the associated relationships between them. Obviously, the information in the CV-6, OV-5a, and OV-5b overlaps, in that the same Operational Activities need to show up in each of these models. But how many of these different models would we need to completely describe a system or enterprise?

An alternative way to capture information is to use an ontology (and the DoDAF 2.02 is based on a rather large ontology – the DoDAF MetaModel 2.0 or DM2) that captures the information in a finite number of classes (or bins) and a set of relationships between these classes. The classes and relationships can have attributes associated with them, which are also pieces of information. At the ontology level, all we see is a mass of data, so most of us want to see the pictures, and most architects and system engineers seem to prefer this approach.

An alternative to the DM2 is the Lifecycle Modeling Language (LML), which contains both an ontology and a set of diagrams to capture the necessary technical and program information associated with a project. Its ontology appears simpler than the DM2, but it actually hides the complexity in the large number of relationships between entities and in the fact that relationships can have attributes of their own. LML purports to be the 80% solution, meaning that you may decide to extend it to meet your specific needs, but let’s just stick with it as-is. LML has 20 classes of entities, and each entity class has a number of relationships associated with it (over 40 in total). So, if we ignore the attributes, we have 20(20-1)/2 = 190 possible pairwise combinations of information. Does that mean we need 190 or more diagrams to visualize all this information? Perhaps; it could be fewer, or it could be more. Can we really maintain that many different diagrams to represent this information, which is what a Framework would require? And of course, once we add in the attributes on both the classes and the relationships, we are trying to display far more information than that.
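
As a worked check on that arithmetic, the count is just the number of unordered pairs of 20 classes:

$$\binom{20}{2} = \frac{20 \times 19}{2} = 190$$

and that total still ignores self-pairings (a class related to itself, as in parent-child decomposition) and every attribute on the classes and relationships.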

So, Frameworks are a useful starting point, which is how John Zachman uses his framework, and that may be enough for enterprise architecture, but it is not a panacea for all systems engineering problems. Sooner or later, as you decompose the system to the level where you can determine what to buy or build, and as you manage these kinds of projects, you will likely need a more robust approach. LML and this kind of ontology make it much easier to capture, manage, and visualize the information. See for yourself. Go to www.innoslate.com and try out the tool that uses LML for free. Explore the Schema Editor to see the entire ontology. Play with the “Getting Started” panel examples. I think once you do, you will find this approach works much better than the Frameworks. In addition, Innoslate has a “DoDAF Dashboard” that enables you to create DoDAF models directly from the dashboard, so if that’s what you are more familiar with, you will find it the easiest way to get started. Notice that many of the other products are automatically populated with information from the other models. That’s because Innoslate reuses that information to automatically create the other views!

Why Requirements Management Needs Analysis

When we talk to potential customers who are seeking requirements management tools, we begin by asking them: “What is your requirements management process?” Usually they say, “We get requirements from our customers and then trace them to our products.” In areas where the customer knows exactly what they need and has been doing it for many, many years, this might work. But in the fast-paced businesses of today, that approach will miss many of the real requirements, gaps, and risk areas. Ignorance of these requirements usually means product failure or, worse, loss of life. Your company is then liable for that failure, and in the US that usually means a lawsuit. If you can’t prove that you did the necessary analyses, your company will usually lose the lawsuit.

If they say they do analysis on those original customer requirements, we then ask: “How do you do requirements analysis?” Often the answer is: “Well, we read them and look for shall statements.” Again, this is insufficient and will lead to failure! A requirement must satisfy the criteria below (a toy automated check is sketched after the list):

  • Clear: unambiguous and not confusing.
  • Complete: expresses a whole idea.
  • Consistent: not in conflict with other requirements.
  • Correct: describes the user’s true intent and is legally possible.
  • Design: does not impose a specific solution on the design; says “what,” not “how.”
  • Feasible: able to be implemented with existing technology, and within cost and schedule.
  • Modular: able to be changed without excessive impact on other requirements.
  • Traceable: uniquely identified, and able to be tracked to predecessor and successor lifecycle items/objects.
  • Verifiable: provable (within realistic cost and schedule) that the system meets the requirement.
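
As a toy illustration of how a tool can begin to check some of these criteria automatically (real checkers, such as Innoslate’s, are far more sophisticated; the word lists and rules here are our own assumptions):

```python
import re

# Words that commonly make a requirement ambiguous (violating "Clear")
AMBIGUOUS = ["appropriate", "user-friendly", "flexible", "robust",
             "as needed", "sufficient", "etc"]

def check_requirement(req_id: str, text: str) -> list:
    """Return a list of findings for a single requirement statement."""
    findings = []
    if not re.search(r"\bshall\b", text, re.IGNORECASE):
        findings.append("no 'shall' -- may not be a binding requirement")
    for word in AMBIGUOUS:
        if word in text.lower():
            findings.append(f"ambiguous term '{word}' (Clear)")
    if not re.search(r"\d", text):
        findings.append("no measurable quantity (Verifiable)")
    if not req_id:
        findings.append("missing unique identifier (Traceable)")
    return findings

print(check_requirement("SRD.5", "The pump shall start appropriately fast."))
# -> flags 'appropriate' and the missing measurable quantity
```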

If they say they understand the need for complete requirements analysis, then we ask: “Did you know that functional analysis and modeling is a critical part of requirements analysis and management?” Again, most of the time the answer we receive is: “No, we don’t need that! All we need is a cheap tool that traces the requirements to product.” As you might expect this approach also leads to failure.

The Department of Defense (DoD) recognizes this problem. They define the purpose of requirements analysis as:

“To (1) obtain sets of logical solutions [functional architectures] derived from defined Technical Requirements, and (2) to better understand the relationships among them. Once these logical solution sets are formed, allocated performance parameters and constraints are set, and the set of Derived Technical Requirements for the system can be initially defined.

Outcomes: For each end product of a system model:

  1. A set of logical models showing relationships (e.g., behavioral, functional, temporal, data flows, etc.) among them
  2. A set of ‘design-to’ Derived Technical Requirements”

Most experts use a process like the one below:

Notice that functional analysis and allocation is in the center of this process. Other process models have this as well, but tend to bury it. That is a mistake. What functional analysis does is enable the analyst to create user stories (sometimes called use cases, scenarios, or threads) that can be used to validate the user requirements. As you conduct this analysis by creating a functional model, you can identify gaps and problem areas that the user may not have originally considered. You can also use these models to derive the performance requirements and identify other constraints using simulation.

The figure below shows a functional model of a process for fighting forest fires. It includes data entities and resources used in the operation. The rounded rectangular blocks represent the functional requirements for the operation. Even though some of these functions may currently be performed by people, many of them can be automated to enhance the performance of the overall system. Many of these functional requirements would be missing if you just asked people what the requirements were.

We can better understand the performance requirements by execution of this model in a simulation. An example of the results from a discrete event simulation is shown below:


These results show the time, cost and resources (water) used to put out the fire. By running these kinds of simulations, we can easily determine cost, schedule and performance metrics needed to accomplish the operation. Isn’t it better to know that early in the design phase rather than later?
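
To give a feel for what such a simulation computes, here is a minimal Monte Carlo sketch in the same spirit (our own toy model, not Innoslate’s discrete event engine; all durations and flow rates are made up for illustration):

```python
import random
import statistics

def one_run():
    """One pass through a toy firefighting sequence (minutes, gallons)."""
    detect   = random.triangular(1, 10, 3)     # low, high, mode
    dispatch = random.triangular(2, 15, 5)
    suppress = random.triangular(20, 120, 45)
    water    = suppress * random.uniform(300, 500)  # gallons per minute
    return detect + dispatch + suppress, water

runs   = [one_run() for _ in range(10_000)]
times  = sorted(t for t, _ in runs)
waters = [w for _, w in runs]

print(f"mean time to extinguish: {statistics.mean(times):.0f} min")
print(f"90th percentile time:    {times[int(0.9 * len(times))]:.0f} min")
print(f"mean water used:         {statistics.mean(waters):,.0f} gal")
```

A distribution of outcomes like this, rather than a single point estimate, is what lets you set schedule and performance requirements with realistic margins early in the design.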

The end result of performing these analyses is a much better set of requirements to build or buy what the users need. It will also build the case that you did your due diligence in this litigious society. That could mean the difference between a successful project and a failed one.