Why Do We Model?

We often say that the job of a systems engineer is to “optimize the system’s cost, schedule, and performance, while mitigating risks in each of these areas.” Note that this is essentially the same thing that the program manager does for the program, hence the close relationship between the two disciplines.

Everyone is talking about “Model-Based Systems Engineering” or MBSE, but why are we modeling? What are we supposed to be getting out of these models? To answer these questions, we have to go back to basics and talk about what we are doing as systems engineers.

Another aspect of systems engineering is that we need to be the honest broker by optimizing the design between all the different design disciplines. The picture below shows what would happen if we let any one particular discipline dominate the design.

Our modeling must support both of these optimization goals: 1) cost, schedule, performance, and risk; and 2) balance across the design disciplines. So how does modeling support that?

Using the Lifecycle Modeling Language (LML) and its implementation in the Innoslate® tool, we can accomplish both tasks. For cost, schedule, and performance optimization, we use only two diagrams, Action and Asset, with the ontology entity classes of Actions, Assets, Input/Outputs, and Conduits as the primary entities in these diagrams. Innoslate® also includes Resources, as well as allocation of Actions to Assets (the performed by/performs relationship) and of Input/Outputs to Conduits (transferred by/transfers). This capability to allocate entities to each other allows the functional model to be constrained by the physical model: Input/Outputs have a size, and Conduits have latency and capacity. Thus, we can calculate the appropriate delays for the transmission of data, power, or any other physical flow.
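To make that delay calculation concrete, here is a rough sketch in Python of the standard store-and-forward approximation you could apply to those three attributes. The function name and the numbers are illustrative only, not Innoslate's internal code:

```python
# Hedged sketch: estimate the transfer delay a Conduit imposes on an
# Input/Output allocated to it. The store-and-forward approximation is
# propagation latency plus transmission time (size divided by capacity).

def transfer_delay(size_bits: float, latency_s: float, capacity_bps: float) -> float:
    """Return the total transfer delay in seconds."""
    return latency_s + size_bits / capacity_bps

# Example: a 10 MB (80 Mbit) data product over a 50 Mbit/s link with 100 ms latency.
print(transfer_delay(size_bits=80e6, latency_s=0.1, capacity_bps=50e6))  # 1.7 s
```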

Resources can be used to represent key performance parameters like weight (mass) and power; Actions can produce, seize, or consume Resources. Another key performance parameter is timing. Time is included in each Action as the duration of that step, and since each Action can be decomposed, the timings of the subordinate steps accumulate into the overall system timings of interest. We can see how this approach gives us the information necessary to predict the performance of the system.
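As a minimal illustration of that timing roll-up, the sketch below sums durations through an Action decomposition. It assumes purely sequential children; parallel branches or loops would need max() or iteration counts instead of a plain sum, and the data structure is hypothetical:

```python
# Hedged sketch: roll up durations through a decomposed Action hierarchy,
# assuming the children of each Action execute sequentially.

from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    duration: float = 0.0                 # leaf duration, in hours
    children: list["Action"] = field(default_factory=list)

    def total_duration(self) -> float:
        # A leaf contributes its own duration; a parent sums its children.
        if not self.children:
            return self.duration
        return sum(child.total_duration() for child in self.children)

mission = Action("Perform Mission", children=[
    Action("Plan", duration=2.0),
    Action("Execute", children=[Action("Step 1", 1.5), Action("Step 2", 3.0)]),
])
print(mission.total_duration())  # 6.5 hours
```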

Note that we can model the business, operations, or development processes this same way and thus use this modeling to derive the overall schedule for the program. So, we get to the Schedule part of optimization as well using the same approach. Talk about reducing confusion between the systems engineering and program management disciplines.

But let’s not forget Cost. Since LML defines an independent Cost class, we can use it to capture the costs incurred by personnel in each step of the process and by the consumption of Resources.

So now, if we can dynamically accumulate these performance parameters, schedule, and cost elements through process execution, we have the first part of our first optimization goal. We can easily execute this model using the discrete event simulator built into Innoslate®. Each execution of the model follows at least one path through it. The tool accumulates the values for cost, produces a Gantt chart schedule, and tracks Resource usage over time, which leads us to performance.
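The bookkeeping behind a single discrete event run can be pictured with a toy sketch like the one below. The step names, durations, costs, and the single power Resource are made-up illustrations of the accounting, not the tool's actual simulator:

```python
# Toy discrete-event execution of one path through a process model:
# each step advances the clock by its duration, incurs a cost, and
# draws on a shared Resource. Purely illustrative bookkeeping.

steps = [  # (name, duration_h, cost_usd, power_kw_drawn)
    ("Start up",  0.5,  200.0, 2.0),
    ("Operate",   8.0, 1200.0, 5.0),
    ("Shut down", 0.5,  150.0, 1.0),
]

clock_h, total_cost, power_available_kw = 0.0, 0.0, 10.0
usage_profile = []  # (time, power in use) samples for a resource chart

for name, duration, cost, power in steps:
    assert power <= power_available_kw, f"{name}: resource overdrawn"
    clock_h += duration
    total_cost += cost
    usage_profile.append((clock_h, power))

print(f"elapsed = {clock_h} h, cost = ${total_cost:,.0f}")
print(usage_profile)
```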

But how do we get to risk? That’s where we find out that the values we use for size, latency, capacity, duration, and other numerical attributes of these entities can be represented by distributions. With these distributions, we can execute the built-in Monte Carlo simulator to run the model as many times as needed to create the distributions for cost, schedule, and performance (Resources). These distributions represent the uncertainty of achieving each item, and those uncertainties are directly related to the probabilities of occurrence for the risk in each area. If we add consequence to this probability, we have the estimated value for Risk. Of course, LML gives us a Risk class, which has been fully implemented in Innoslate® and is visualized using the Risk Matrix diagram.
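Here is a hedged sketch of that Monte Carlo idea: replace fixed durations with distributions, run the model many times, and read the probability of overrunning a threshold off the results. The two-step model, the threshold, and the consequence value are assumptions for illustration only:

```python
# Hedged sketch: Monte Carlo over a two-step process with triangular
# duration distributions, yielding an overrun probability that, combined
# with a consequence, gives an expected risk value.

import random

def one_run() -> float:
    # Two sequential steps; triangular(low, high, mode) durations in hours.
    return (random.triangular(4.0, 8.0, 5.0) +
            random.triangular(2.0, 6.0, 3.0))

runs = [one_run() for _ in range(10_000)]

threshold_h = 10.0            # assumed schedule commitment
p_overrun = sum(d > threshold_h for d in runs) / len(runs)
consequence_usd = 50_000      # assumed cost of missing the date
print(f"P(overrun) = {p_overrun:.2f}, "
      f"expected risk = ${p_overrun * consequence_usd:,.0f}")
```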

Now that we have the first optimization complete, how do we get to the next one: optimization across the design disciplines? LML comes into play there as well. LML is an easy-to-understand language designed for all the stakeholders, including the design engineers, management, users, operators, cost analysts, etc. They all can play their role in the system development, many using their own tools. LML provides a common language that anyone can use, and we can easily translate what the electrical, mechanical, or any other engineer does into this language. Innoslate® also provides the capability to store and view CAD files. Results from Computational Fluid Dynamics (CFD) codes or other physics-based models can also be captured as Artifacts. We can take the summary results and translate them into the performance distributions used in the Monte Carlo calculations. For example, if we use Riverbed to characterize the capacity (bandwidth) and latency of a network, we can take those resulting distributions and use them to refine our model. We can then rerun the Monte Carlo calculation and see the impact.

LML and Innoslate® give us the capability to meet the optimization goals of both systems engineering and program management in a way that is simple and easy to explain to decision makers. Think of LML and Innoslate® as modeling made useful.

A Synopsis of CSER 2019 (Conference on Systems Engineering Research)

Thoughts from the Conference on Systems Engineering Research (CSER) held last week at the National Press Club in Washington DC.  

By Christian Manilli

 

As many of you may know, CSER was organized by the Systems Engineering Research Center (SERC), the University-Affiliated Research Center of the US Department of Defense. The SERC leverages the research and expertise of senior researchers from 22 universities throughout the United States. The SERC is unique in the depth and breadth of its reach into the systems engineering community through its advocacy and its support of systems engineering research and education. Events like CSER show its commitment to advancing the systems engineering profession.

 

OK, enough of the ad for the SERC. The two-day event was filled with an impressive line-up of speakers with a deep understanding of the practice and nuance of the subject. The venue was exceptional, and the opportunity to interact with peers and colleagues from academia, industry, and the research community was eye-opening.

 

As I am a relatively new member of the SPEC Innovations team, I had the good fortune to attend the conference with Dr. Steve Dam. We shared with the CSER community information about Innoslate, a model-based systems engineering tool that is free for academic use and currently used in over 100 universities around the world. The first day had a succession of remarkable speakers. The first keynote was given by James Faist, Director of Research and Engineering for the DoD, who covered advanced capabilities. Breakout sessions were held during mid-morning, with topics ranging from AI to Agile SE to SE effectiveness. After lunch, the second keynote was given by Dr. Jeffery Wilcox, Head of Digital Transformation at Lockheed Martin. Who would have known that interfaces between systems and processes are becoming even more important than they already were? This was followed by afternoon breakout sessions and an evening reception.

 

Funny enough, during the morning of the first day I noticed an older gentleman at the next table who seemed to be keeping his own company. I nodded to him, he nodded back, and I went off to a breakout session. Day two’s morning keynote speaker was Kristen Baldwin, Deputy Director for Strategic Technology Protection and Exploitation at the DoD. She covered Strategic Technology Protection and Exploitation. This was followed by breakout sessions that ranged from model-based engineering to SE decision making and resilience. Lunch came next, followed by the final keynote speaker. To my surprise, the final speaker was the older gentleman I had noticed sitting by himself the previous day. He happened to be William Shepherd, a retired Navy Captain who had previously served as a Navy SEAL platoon commander and operations officer. About midway through his Navy career, Captain Shepherd decided to put in a package for the astronaut program. He was selected for the Astronaut Corps and made three Space Shuttle flights. He then went on to command the first crew of the International Space Station.

 

His keynote started with a video of the preparation and training of his Space Station crew: a three-man team commanded by Captain Shepherd, with two Russian flight engineers. All of the training and the launch were conducted in Russia and Kazakhstan. They reached orbit and began the initial construction of the International Space Station. Several Shuttle missions docked with the Station during this time as construction grew to add living space, solar panels, and multiple labs. After more than four months, the crew returned to Earth.

 

What struck me most about Captain Shepherd’s address was the enormous systems engineering challenge that was overcome during the planning, design, integration, test and, most importantly to the crew, the operations. If you think about what a joint NASA/Russian space mission entails, the systems engineering challenges are enormous. All three crewmen needed to be fluent in both English and Russian. Systems and modules were made by multiple NASA contractors, multiple Russian contractors, and multiple European Space Agency contractors. Yet all of them were integrated and distilled into a model-based approach. Captain Shepherd said that the key to adding the new modules, troubleshooting onboard issues, and scheduling upgrades was the model-based approach they used. Whenever he and the Russian crew members came to a language impasse, they would reach for the SE models, which interwove in-depth schematics, color-coded processes, and color-coded callouts. These models ensured that all crew members understood the approach before they took action.

 

However, the models he was talking about were “old-fashioned” physical models, not Model-Based Systems Engineering (MBSE) digital models. The question is: can we replace the physical models with digital ones? The answer can only be “yes” once we obtain enough data to fully represent the physical system and have a way to organize and visualize that data. Innoslate provides a large step in that direction. It already provides cloud-native access to large amounts of computational power and data storage, and it already has many ways to visualize and track the data in the database. What’s the next step? Continue to watch as SPEC Innovations pushes the boundaries of MBSE.

The Future of Systems Engineering

 

I attended an interesting systems engineering forum this past week. A fair number of government organizations and contractors were participants. There were many interesting observations from this forum, but one presenter from a government agency said something that particularly struck me. He was saying that one of the major challenges he faced was finding people who were trained in UML and SysML. It made me think: “Why would it be difficult to find people trained in UML? Wasn’t UML a software development standard for nearly the last 20 years? Surely it must be a major part of the software engineering curriculum in all major universities?”

The Unified Modeling Language (UML) was developed in the late 1990s-early 2000s to merge competing diagrams and notations from earlier work in object-oriented analysis and design. This language was adopted by many software development organizations in the 2000s. But as the graph below shows, search traffic for UML has substantially declined since 2004.

This trend is reinforced by a simple Google search of the question: “Why do software developers not use UML anymore?”

It turns out that the software engineering community has moved on to the next big thing: Agile, which systems engineers are now also trying to morph into a systems engineering methodology, just as they did when they created the UML profile for systems engineering: the Systems Modeling Language (SysML).

This made me wonder, “Do Systems Engineers try to apply software engineering techniques to perform systems engineering and thereby communicate better with the software engineers?” I suddenly realized that I have been living through many of these methodology transitions from one approach to software development and systems development to another.

My first experience in modeling was using flow charts in my freshman year class on FORTRAN. FORTRAN was the computer language used mostly by the scientific community in the 1960s through the 1990s. We created models using a standard symbol set like the one below.

Before the advent of personal computers, these templates were used extensively to design and document software programs. However, as we quickly learned in our classes, it was much quicker to write-execute-debug the code than it was to draw these charts by hand. Hence, we primarily used flowcharts to document the programs, not to design them.

Later in life, I used a simplified version of this notation to convert a rather large (at the time) software program from Cray computers to VAX computers. I used a rectangular box for most steps and a rectangular box with a point on the side to represent decision points. This simplified approach provided the same results in a form that was much easier to read and understand. You didn’t have to worry about the nuanced notations or become an expert in them.

Later, after getting a personal computer (a Macintosh 128K) I discovered some inexpensive software engineering tools that were available for that platform. These tools were able to create Data Flow Diagrams (DFDs) and State Transition Diagrams (STDs). At that time, I had moved from being a software developer into project management (PM) and systems engineering (SE). So, I tried to apply these software engineering tools to my systems problem, but they never seemed to satisfy the needs of the SE and PM roles I was undertaking.

In 1989, I was introduced to a different methodology in the RDD-100 tool. It contained the places to capture my information and (automatically) produced diagrams from that information; or I could use the diagrams to capture the information as well. All of a sudden, I had a language that really met my needs. Later, CORE applied a modified version of this language and became my tool of choice. The only problem was that no one had documented the language or gone to the effort of making it a standard, so arguments abounded throughout the community.

In subsequent years I watched systems engineers argue between functional analysis and object-oriented methods. The UML camp pushed object-oriented tools, such as Rational Rose; the functional analysis camp pushed tools such as CORE. We used both on a very major project for the US Army (yes, that one), and customers seemed to understand and like the CORE approach better (from my perspective). On other programs, I found a number of people using a DoD Architecture Framework (DoDAF) tool called Popkin System Architect, which was later procured by IBM (and more recently sold off to another vendor). Popkin included two types of behavioral diagrams: IDEF0s and DFDs. IDEF0 was another software development language adopted by systems engineers after software developers had moved on to object-oriented computer and modeling languages.

I hope you can now see the pattern: software engineers develop and use a language, which is later picked up by the systems engineering community, usually at a point where its popularity in the software world is declining. The systems engineering community eventually realizes the problems with that language and moves on. However, one language has endured: the underlying language represented in RDD-100 and CORE. It can trace its roots back to the heyday of TRW in the 1960s. That language was invented and used on programs that went from concept development to initial operational capability (IOC) in 36 months. It was used for both the systems and software development.

But the problem was, as noted above, there was no standard. A few of us realized the problems we had encountered in trying to use these various software engineering languages and wanted to create a simpler standard for use in both SE and PM. So, in 2012 a number of us got together and developed the Lifecycle Modeling Language (LML). It was based on work SPEC Innovations had done previously, and this group validated and greatly enhanced the language. The committee “published” LML on an open website (www.lifecyclemodeling.org) so it could be used by anyone. But I knew even before the committee started that the language could not be easily enhanced without being instantiated in a software tool. So, in parallel, SPEC created Innoslate®. Innoslate (pronounced “In-no-Slate”) provided the community with a tool to test and refine the language and to map it to other languages, including the DoDAF MetaModel 2.0 (DM2) and, in 2014, SysML (incorporated in LML v1.1). Hence, LML provides a robust ontology for SysML (and UML) today. But it goes far beyond SysML. Innoslate has proven that many different diagram types (over 27 today) can be generated from the ontology, including IDEF0, N2, and many other forms of physical and behavioral models.

Someone else at the SE forum I attended last week also said something insightful. They talked about SysML as the language of today and said that “10 years from now there may be something different.” That future language can and should be LML. To quote George Allen (the long-departed coach of the LA Rams and Washington Redskins): “The Future is Now!”

 

Why MBSE Still Needs Documents

A lot of people are pushing Model-Based Systems Engineering (MBSE) as a way to deliver just models … and by models they mean drawings. The drawings can and should meet the criteria provided by the standards, be it SysML, BPMN, or IDEF. But ultimately, as systems engineers, we are on the hook to deliver documents. These documents (specifications) form the basis for contracts and thus have significant legal ramifications. If the specifier uses a language that not everyone understands and supplies only drawings in the delivered model, confusion will reign supreme. Even worse, if the tool does not enforce the standards and allows users to put anything on the diagram, then all bets are off. You can imagine that the lawyers salivate over this kind of situation.

But it’s even worse, really, because not only are diagram standards routinely ignored, but so are other best practices, such as giving every entity in the database a unique number and a description. As simple as this sounds, most people put off doing these things until later, if ever. This leads us to our first question: 1) Is a model a better method to specify a system?

This question requires us to look at the underlying assumption behind delivering models vs. a document. The underlying assumption is that the model provides a better communication of the complete thoughts behind the design so that the specification is easier to understand and execute. Which leads us to the next question: 2) Can a document provide the same thing?

Not if we use standard office software to produce the document. The way it is commonly done today is that someone writes up a document in a tool like MS Word, and then that file is shipped around for everyone to comment on (using track changes, naturally), and all the comments are adjudicated in a “Comment Matrix.” Once the document is completed, someone converts it to PDF (a simple “Save as …” in MS Word). In the worst case, someone prints the document and scans it into a PDF. Now we have lost all traceability, or even the ability to hyperlink portions of the information to other parts of the design, making requirements traceability very difficult.

However, if you author your document in a tool like Innoslate, you can use its Documents View to create the document as entities in the database. You can link the individual entities, using the built-in or user-created relationships, to trace to other database entities, such as the models in the Action Diagram or Test Cases. This provides traceability to both the document and the models. In fact, the diagrams in Innoslate can be embedded in the document as well, keeping it live and reducing the configuration management problem inherent in the standard approach.

MBSE doesn’t mean the end of documents; it means using models to analyze data and create more informative documents. Using a tool like Innoslate lets you have the best of both worlds: documents and models in one complete, integrated package.

How to Choose the Right MBSE Tool

Find the Model-Based Systems Engineering Tool for Your Team

A model-based systems engineering tool can provide you with accuracy and efficiency. You need a tool that can help you do your job faster, better, and cheaper. Whether you are using legacy tools like Microsoft Office or are looking for an MBSE tool that better fits your team, here are some features and capabilities you should consider.

Collaboration and Version Control

It’s 2018. The MBSE tool you are looking at should definitely have built-in collaboration and version control. You want to be able to communicate quickly and effectively with your team members and customers. Simple features such as a chat system and comment fields are a great start. Workflow and version control are more complex features but very effective. Workflow is a great feature for a program manager: it allows the PM to design a process workflow for the team that sends out reminders and approvals. Version control lets users work together simultaneously on the same document, diagram, etc. If you are working in a team of two or more people, you need a tool with version control. Otherwise, you will waste a lot of time waiting for a team member to finish the document or diagram before you can work on it.

Built-in Modeling Languages Such as LML, SysML, BPMN, Etc.

Most systems engineers need to be able to create uniform models. LML encompasses the necessary aspects of both SysML and BPMN. If you would like to try a simpler modeling language for complex systems, LML is a great way to do that. A built-in modeling language helps you make your models correct and understandable to all stakeholders.

Executable Models

An MBSE tool needs to be much more than just a drag-and-drop drawing tool; the models need to be executable. Executable models ensure accurate processes through simulation. Innoslate’s activity diagram and action diagram are both executable through the discrete event and Monte Carlo simulators. With the discrete event simulator, you will not only be able to see your process models execute, but you will also be able to see the total time, costs, resources used, and slack. The Monte Carlo simulator will show you the standard deviation of your model’s time, cost, and resources.

Easy to Learn

It can take a lot of time and money to learn a new MBSE tool. You want a relatively short learning curve. First, look for a tool that has an easy user interface. A free trial, sandbox, or starter account is a major plus: it lets you get a good feel for how easy the tool is to learn. Look for tools that provide free online training. It’s important that the tool provider is dedicated to educating its users. They should have documentation, webinars, and free or included support.

Communicates Across Stakeholders

Communication in the system/product lifecycle is imperative. Most of us work on very diverse teams. Some of us have backgrounds in electrical engineering or physics or maybe even business. You need to be able to communicate across the entire lifecycle. This means the tool should have classes that meet the needs of many different backgrounds, such as risk, cost, decisions, assets, etc. A tool that systems engineers, program managers, and customers can all understand is ideal. The Lifecycle Modeling Language (LML) is a modeling language designed to meet all the stakeholder needs.

Full Lifecycle Capability

A tool with full lifecycle capability will save you money and time. If you don’t choose a tool with all the features needed for the project’s lifecycle, you will have to purchase several different tools. Each of those tools can cost the same amount as purchasing just one full lifecycle MBSE tool. You will also have to spend more money on training, since you will not be able to do large group training. Most tools do not work together, so you will have to spend resources on integrating the different tools. This makes the overall project cost a lot more. This is why Innoslate is a full lifecycle MBSE solution.

 

It’s important to find the tool that is right for your project and your team. These are just helpful guidelines to help you find the right tool for you. You might need to adjust some of these guidelines for your specific project. If you would like to see if Innoslate is the right tool for your project, get started with it today or call us to see if our solution is a good fit for you.

 

Why Do We Need Model-Based Systems Engineering?

MBSE is one of the latest buzzwords to hit the development community.

The main idea was to transform the systems engineering approach from “document-centric” to “model-centric.” Hence, the systems engineer would develop models of the system instead of documents.

But why? What does that buy us? Switching to a model-based approach helps: 1) coordinate system design activities; 2) satisfy stakeholder requirements; and 3) provide a significant return on investment.

Coordinating System Design Activities

The job of a systems engineer is in part to lead the system design and development by working with the various design disciplines to optimize the design in terms of cost, schedule, and performance. The problem with letting each discipline design the system without coordination is shown in the comic.

If each discipline optimized for their area of expertise, then the airplane (in this case) would never get off the ground. The systems engineer works with each discipline and balances the needs in each area.

MBSE can help this coordination by providing a way to capture all the information from the different disciplines and share that information with the designers and other stakeholders. Modern MBSE tools, like Innoslate, provide the means for this sharing, as long as the tool is easy for everyone to use. A good MBSE tool will have an open ontology, such as the Lifecycle Modeling Language (LML); many ways to visualize the information in different interactive diagrams (models); ability to verify the logic and modeling rules are being met; and traceability between all the information from all sources.

Satisfying Stakeholder Requirements

Another part of the systems engineers’ job is to work with the customers and end-users who are paying for the product. They have “operational requirements” that must be satisfied so that they can meet their business needs. Otherwise they will no longer have a business.

We use MBSE tools to help us analyze those requirements and manage them to ensure they are met at the end of the product development. As such, the systems engineer becomes the translator from the electrical engineers to the mechanical engineers to the computer scientists to the operator of the system to the maintainer of the system to the buyer of the system. Each speaks a different language. The idea of using models was to provide this communication in a simple, graphical form.

We need to recognize that many of the types of systems engineering diagrams (models) do not communicate to everyone, particularly the stakeholders. That’s why documents contain both words and pictures. They communicate not only the visual but explain the visual image to those who do not understand it. We need an ontology and a few diagrams that seem familiar to almost anyone. So, we need something that can model the system and communicate well with everyone.

Perhaps the most important thing about this combined functional and physical model is that it can be tested to ensure that it works. Using discrete event simulation, this model can be executed to create timelines, identify resource usage, and accumulate costs. In other words, it allows us to optimize cost, schedule, and performance of the system through the model. Finally, we have something that helps us do our primary job. Now that’s model-based systems engineering!

Provides a Significant Return on Investment

We can understand the idea of how systems engineering provides a return on investment from the graph.

The picture shows what happens when we do not spend enough time and money on systems engineering. The result is often cost overruns, schedule slips, reduced performance, and program cancellations. Something not shown on the graph, since it is NASA-related data for unmanned satellites, is the potential loss of life due to poor systems engineering.

MBSE tools help automate the systems engineering process by providing a mechanism to not only capture the necessary information more completely and traceably, but also verify that the models work. If those tools contain simulators to execute the models and from that execution provide a means to optimize cost, schedule, and performance, then fewer errors will be introduced in the early, requirements development phase. Eliminating those errors will prevent the cost overruns and problems that might not be surfaced by traditional document-centric approaches.

Another cost reduction comes from conducting model-based reviews (MBRs). An MBR uses the information within the tool to show reviewers what they need to ensure that the review evaluation criteria are met. The MBSE tool can provide a roadmap for the review using internal document views and links, and it can provide commenting capabilities so that the reviewers’ questions can be posted. The developers can then use the tool to answer those comments directly. By not having to print copies of the documentation for every reviewer, and then consolidate the markups into a document for adjudication, we cut out several time-consuming steps, which can reduce the labor cost of the review by an order of magnitude. This MBR approach can reduce the time to review and respond from weeks to days.

Bottom-line

The purpose of “model-based” systems engineering was to move away from being “document-centric.” MBSE is much more than just a buzzword. It’s an important approach that allows us to develop, analyze, and test complex systems. Most importantly, we need MBSE because it provides a means to coordinate system design activities, satisfy stakeholder requirements, and provide a significant return on investment. The “model-based” technique is only as good as the MBSE tool you use, so make sure to choose a good one.

Why Use an Integrated Solution for Requirements Management and MBSE?

Systems Engineering and Requirements Management

One of the questions we get most is: “Why should we get a tool that does both requirements management and MBSE?” With the growth of SaaS products, we are seeing industries everywhere adopt integrated platforms. For example, the CRM we use, HubSpot, has taken advantage of the unique capabilities of the SaaS market. HubSpot integrated the entire marketing lifecycle into one place: content management, analytics, email automation, etc. We’ve greatly benefited from using only one tool for all of our marketing efforts. Innoslate provides similar benefits with its integrated platform for the entire product lifecycle, including process modeling, risk analysis, and testing.

A lot of times, we hear people tell us that they “only do the requirements management or the process modeling part of the lifecycle,” and therefore they don’t need an integrated platform. But think about your overall goal when you are capturing, managing, and analyzing requirements. You are certainly not doing this entire process because you enjoy paperwork. Your goal in requirements management is to meet the goals and needs of stakeholders to complete a product or process: for instance, creating medical devices, building large networks, or developing an autonomous vehicle. Let’s keep using the autonomous vehicle example. You’ve done your research; you have thousands and thousands of high-quality requirements. The document has a complete layout, including roadside infrastructure, vehicle safety, DOT regulations, vehicle communication, etc. You know exactly what you are building. The next step is how you are going to build it. Whether or not you are the one that does the ‘how,’ at some point both parts will have to be completed. Even if the requirements team is handing the project off to another team for analysis and testing, it’s still best to use only one tool. If you don’t, you’ll wind up spending a long time transferring and organizing data. Unfortunately, you’ll probably lose some data in that process, which will result in not having full traceability. With a tool like Innoslate you can develop the requirements and then create the process for how you will build the system.

 

Long story short…There are a lot of reasons you should integrate requirements management with model-based systems engineering.

 

Here are seven to start with.

 

Saves the entire company time

This is especially true for large organizations. Software tools require training and management. If you have two different tools, you have to do double the training. IT has to spend time managing both tools. It’s even worse if you are putting the software on servers: then IT has to manage two servers. If you want API integration between the tools, IT probably has to set that up too. You and your team will lose weeks of time importing and exporting data between tools.

Saves money

One tool costs much less than two. Innoslate costs much less than most requirements management tools and MBSE tools. It’s certainly less than having to buy both. You also don’t have to pay for any additional plugins to make the tools work with each other.

 

Time is money. You are going to save a lot of money by not wasting employees’ time importing all that data into another tool, or having IT manage two servers and force integration.

 

Keeps your data all in one place

Besides saving lots of time and stress, keeping your data in one place also reduces risk. You can easily lose data (and not even realize it) during the import/export process. Every tool has a different underlying schema, which can make it extremely challenging to import all the data exactly as the requirements manager intended. This can result in a loss of traceability and make the requirements manager look bad. Using one tool, you can easily trace requirements, actions, documents, test processes, and more to each other. Innoslate allows you to create relationships from almost anywhere, making it simple to establish full traceability through the entire project.

 

Collaboration

Many times, requirements managers and engineers struggle to communicate. We just pass our requirements on to the engineers and let them do the next steps. No one wants this to occur. We want both sides to communicate effectively, so they can capture lessons learned. Using two different tools just makes this entire process harder. Innoslate provides version control, chat, comments, and more to make collaborating with diverse teams easy. So the next time the engineers want to give the requirements team feedback, they can use the same Innoslate project.

 

Document the Requirements Management Process

What better way to create a requirements management process than to use the advanced built-in models in an MBSE tool? In Innoslate you can model the entire requirements management process. From there you can trace each action back to the requirements or any document. You can also use the collaboration features to turn the process into a workflow. This way, team members will receive email notifications when it’s their turn to do the next step or when something has been approved.

 

Develop requirements from models

If you use a tool that integrates both requirements management and MBSE, you can actually develop low-level requirements from the models. The beauty of creating requirements from models is that it allows you to analyze the current process and the future process, and from that you can understand what you need. This is especially great if you are starting a requirements document from scratch. Innoslate has a built-in feature that automatically creates requirements documents directly from your models. Here’s a great article that shows you how: “Developing Requirements from Models – How and Why?”

 

Gain Valuable Insight

Applying MBSE to requirements management gives you valuable insight you wouldn’t have otherwise. Creating executable models allows you to use the Monte Carlo simulator to calculate the variance in cost, time, and resources. Without even leaving the program, you can trace back to the requirements document to see if the proposed process meets the cost, time, and resources specified in the requirements.

 

Save your organization a whole lot of time, money, and stress by using an integrated solution for a project’s lifecycle. Having the requirements management, modeling, and testing all in one easy to access place will make everyone’s life easier. If you would like to try an integrated requirements management and MBSE solution, sign up for Innoslate at innoslate.com/sign-up.

How to Use Innoslate to Perform Failure Modes and Effects Criticality Analysis

“Failure Mode and Effects Analysis (FMEA) and Failure Modes, Effects and Criticality Analysis (FMECA) are methodologies designed to identify potential failure modes for a product or process, to assess the risk associated with those failure modes, to rank the issues in terms of importance and to identify and carry out corrective actions to address the most serious concerns.”[1]

FMECA is a critical analysis required for ensuring the viability of a system during the operations and support phase of the lifecycle. A major part of FMECA is understanding the failure process and its impact on the operations of the system. The figure below shows an example of how to model a process to include the potential for failure. Duration attributes, Input/Output, Cost, and Resource entities can be added to this model and simulated to begin estimating metrics. You can use this with real data to understand the values of existing systems, or derive the needs of a new system (thresholds and objectives) by including this kind of analysis in the overall system modeling.

[Figure: FMEA Action Diagram]

Step one is to build this Action Diagram (for details on how to do this, please reference the Guide to Model-Based Systems Engineering). Add a loop to periodically enable the decision on whether or not a failure occurs. The time between these decisions can be adjusted by the number of iterations of the loop and the duration of the “F.11 Continue Normal Operations” action.

Adjust the number of iterations by selecting the loop action (“F.1 Continue to operate vehicle?”) and pressing the </>Script button (see below). A dialog appears asking you to edit the action’s script. You can use the pull-down menu to select Loop Iterations, Custom Script, Probability (Loop), or Resource (Loop). In this case, select “Loop Iterations.” Then type in the number (choose 100) as seen in the figure below.

Next, change the durations of this action and of F.11. Since the loop decision is not a factor in this model, you can give it a nominally small time (1 minute as shown). For “F.11 Continue Normal Operations,” choose 100 hours. Combined with the 90% branch probability on this path, that yields roughly 900 operating hours between failures: with a 90% chance of continuing on each pass, you expect about nine 100-hour normal-operation cycles before a failure occurs. That is not unusual for a vehicle in a suburban environment. We could provide a more accurate estimate, including using a distribution for the normal operating hours.

The 90% branch probability comes from the script for the OR action (“F.2 Failure?”). That selection results in the dialog box below.

Now assume a failure occurs approximately 10% of the time. Since the failure modes are themselves probabilistic in nature, the paths need to be selected based on those probabilities. The second OR action (“F.3 Failure Mode?”) shows three possible failure modes. You can add more branches by selecting F.3 and using the “+Add Branch” button to represent other failure modes, such as “Driver failure,” “Hit obstacle,” or “Guidance System Loss.”

Note that to change the default names (Yes, No, Option) to the names of the failure modes, just double-click on the name and a dialog will pop up (as on the right). Type in the name you prefer.

To finish off this model, add durations to the various other actions that may result from the individual failures. The collective times represent the impact of the failures on the driver’s time. Since you do not have any data at this time for how long each of these steps would take, just estimate them using triangular distributions of time (see sidebar below).

This shows an estimate from a minimum of ½ hour to a maximum of 1 hour, with the mean being ¾ hour. If you do this for the other actions, you can now execute the model to determine the impacts on time.
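In standard terms, that sidebar describes a triangular distribution with a minimum of 0.5 hours, a maximum of 1 hour, and a mode of 0.75 hours, whose mean is (0.5 + 0.75 + 1)/3 = 0.75 hours. You can sanity-check such an estimate outside the tool with a few lines of Python (illustrative only, not part of Innoslate):

```python
import random

# Triangular repair-time estimate: 1/2 h minimum, 1 h maximum, 3/4 h mode.
samples = [random.triangular(0.5, 1.0, 0.75) for _ in range(100_000)]
print(sum(samples) / len(samples))   # ~0.75 h, matching the stated mean
```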

Note that you could also accumulate costs by adding a related Cost entity to each of the actions. Simply create an overall cost entity (e.g., “Failure Costs”) and then decompose it by the various costs of the repairs. Then you can assign the costs to the actions by using a Hierarchical Comparison matrix. Select the parent process action (“F Vehicle Failure Process”) and use the Open menu to select the comparison matrix (at the bottom of the menu). You will then see a sidebar that asks for the “Target Entity,” which is the “Failure Costs” entity you just created. Then select the “Target Relationship,” for which the only choice between costs and actions is “incurs,” and push the blue “Generate” button to obtain the matrix. Select the intersections between the process steps and the costs. This creates the relationships between the actions and the costs. The result is shown below.

[Figure: Hierarchical Comparison Matrix]

If you have not already added the values of the costs, you can do it from this matrix. Just select one of the cost entities and its attributes show up on the sidebar (see below).

Note how you can add distributions here as well.

Finally, you want to see the results of the model. Execute the model using the discrete event and Monte Carlo simulators. To access these simulators, just select “Simulate” from the Action Diagram for the main process (“F Vehicle Failure Process”). You can see the results of a single discrete event simulation below. Note that the gray boxes mean that those actions were never executed. They represent the rarer failure mode of an engine failure (assume that you change your oil regularly, or this would occur much more often).

To see the impact of many executions, use the Monte Carlo simulator. The results of this simulation for 1000 runs are shown below.

As a result, you can see that for about a year in operation, the owner of this vehicle can expect to spend an average of over $1,560. However, you could spend over $3,750 in a bad year!
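For readers who want to see the shape of this calculation outside the tool, below is a hedged end-to-end sketch of the failure-process Monte Carlo in Python. The structure (100 loop iterations per run, a 10% failure chance per iteration, a failure mode drawn by probability, and a triangular repair cost per mode) mirrors the model above, but the mode probabilities and cost ranges are placeholders, so the statistics will resemble rather than reproduce the figures quoted here:

```python
# Hedged sketch of the failure-process Monte Carlo. All probabilities and
# cost ranges below are illustrative placeholders, not the article's data.

import random

FAILURE_MODES = {            # mode: (probability given a failure, (low, high, mode) cost in $)
    "Flat tire":      (0.60, (50, 150, 80)),
    "Battery dead":   (0.30, (100, 300, 150)),
    "Engine failure": (0.10, (1500, 4000, 2500)),
}

def one_year_cost(iterations: int = 100, p_fail: float = 0.10) -> float:
    """One discrete-event run: loop the process, roll for failure each pass,
    pick a failure mode by probability, and add a triangular repair cost."""
    cost = 0.0
    for _ in range(iterations):
        if random.random() < p_fail:
            modes, weights = zip(*((m, p) for m, (p, _) in FAILURE_MODES.items()))
            mode = random.choices(modes, weights=weights)[0]
            low, high, peak = FAILURE_MODES[mode][1]
            cost += random.triangular(low, high, peak)
    return cost

# Monte Carlo: repeat the run 1000 times and summarize the distribution.
runs = [one_year_cost() for _ in range(1000)]
print(f"mean = ${sum(runs)/len(runs):,.0f}, worst = ${max(runs):,.0f}")
```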

For more detailed analysis, you can use the “CSV Reports” to obtain the details of these runs.

[1] From http://www.weibull.com/hotwire/issue46/relbasics46.htm accessed 1/18/2017

Ford vs. Mazda Transmissions: Why Does Quality Matter?

In the 1980s, Ford owned roughly 25% of Mazda (then known as Toyo Kogyo). Ford had Mazda manufacture some automatic transmissions for cars sold in the United States. Both Ford and Mazda built the same transmission off the same specification, and both had 100% specification conformance. However, the Ford transmissions were receiving more customer complaints about noise and were incurring higher warranty repair costs. This led Ford engineers to investigate, and they found that the Ford-manufactured transmissions utilized 70% of the available tolerance spread for manufactured parts, while Mazda used only 27% (AC 2012-4265: Promoting Awareness in Manufacturing Students of Key Concepts of Lean Design to Improve Manufacturing Quality). The Ford engineers began to realize that the Mazda transmissions were higher quality than the Ford-manufactured ones. It turned out that Mazda was using a slightly more expensive grinding process than Ford was. This raised Mazda’s manufacturing costs; however, the full lifetime costs were higher for the Ford-manufactured transmissions.

This story is a prime example of why it is important to think about quality. Too often we tend to focus on other metrics and neglect quality, or we use a single metric to define quality. Ford experienced this by focusing on a “zero defect” policy, thinking that zero defects in a transmission would produce a quality transmission. Mazda expanded on this policy and took the whole lifecycle cost and experience into consideration as they developed their transmissions. With this holistic view, it is easy to see why engineers need to think about quality across a program’s entire lifecycle.

Building Quality into The Lifecycle

If the goal of an organization is to deliver a quality product, engineers at all stages need to think about how they can add quality to the system. An easy way to think about adding quality is to ask yourself: “What are the extra details, the extra effort, the extra care that can be put into the product?” When these extra efforts are applied to a properly defined system, the output is often a quality system. To a program manager, all the extra effort sounds like a fair amount of extra cost. This is true; however, it is important to weigh the short-term cost increase against the potential long-term cost savings. Below are two examples of how to add quality in the lifecycle.

Design

One of the first steps of the design effort is requirements building, and unfortunately a requirement like “system shall be of a quality design” does not cut it. Never mind that this requirement violates nearly all the good-requirement rules; it fails to take into account the characteristics of a quality system. Is it the “spare no expense” engineering effort of high-end audio systems, or is it the good-quality-for-the-price factor of Japanese-manufactured cars in the 1970s? It is important to identify how the customer and the market define quality. Having this understanding informs choices going forward and prevents a scenario where the market doesn’t value the added quality efforts.

Procurement/Manufacturing

The procurement/manufacturing phase of the lifecycle is where quality efforts are the most visible. As parts are being ordered, it is important to think about how the whole supply chain thinks about quality. This involves reviewing the suppliers’ suppliers to verify that the parts being delivered do not have a poor design or a possible defect that could be hidden through integration. For internally manufactured parts, is extra effort being applied to check that the solder on pins is clean and will not short other sections under heating? Extra thought and care should be given to the human interface of the system, as this normally plays a major role in determining the quality of a system. For software: do user interfaces make sense, do they flow, are they visually appealing? These are the kinds of questions that should be asked to help guide engineers toward building a quality system.

“Quality Is Our Top Priority”

All too often, I find a Scott Adams Dilbert comic strip that highlights a common problem engineers face. In the comic below we have a perfect example of the Pointy-Haired Boss directing Dilbert, Alice, and Wally to focus on quality.


DILBERT from Sunday March 28, 2004

 

What the Pointy-Haired Boss fails to realize is that quality and the rest of his priorities are not mutually exclusive and can be pursued concurrently. A quality system is one that is safe, law-abiding, and financially viable. Quality should be added on top of these factors, making sure that the extra bit of design work is worthwhile. All of these factors, when properly combined with good design and engineering, produce a quality system.

By Daniel Hettema. Reposted from the SPEC Innovations’ blog with permission.

What is Model-Based Systems Engineering?

Systems engineering is the discipline of engineering that endeavors to perfect systems. As such, systems engineering is a kind of meta-engineering that can be applied across all complex team-based disciplines. The idea of systems engineering is to enhance the performance of human systems. It has more to do with the engineering of team performance, meetings, and political agreements. It is not quite a social science, but it is the science of perfecting human outcomes.

The International Council on Systems Engineering (INCOSE) has been the organizing body for systems engineering programs. The field of systems engineering has been emerging as a discipline with its own unique training and advanced degree programs. As of 2009, some 80 United States institutions offered undergraduate and graduate programs in systems engineering.

Systems-centric programs treat systems engineering as a separate discipline, with a specific focus on a separately developed body of theory and practice. Domain-centric programs are embedded within conventional engineering fields. All programs are designed to develop the capability to manage and oversee large-scale engineering projects.

The field of systems engineering has developed its own unique set of tools and methodologies that have less to do with the physics and mathematics of the hardware of the project and more to do with the process of bringing the elements of the project together. Modeling software here refers to modeling the process of creation, not so much building models of what is being created. These tools enable members of a project team to better collaborate and plan the process of creating a complex finished product, such as a space station or a skyscraper.

Innoslate has developed as an all-inclusive baseline tool for modeling and managing engineering projects. It includes full collaboration features that allow all team members to work from the same information base, to contribute new information, and to access what has been accumulated. It has a rich vocabulary of diagramming and flow-charting media to illustrate, change, and embellish the engineering process over time. It has modalities that provide clear feedback about the growth of the project: where it has achieved goals and where it lags behind.

Reposted from the SPEC Innovations blog with permission.