Data Analytics – Paving the Way for the Future of Digital Engineering

I recently attended the first Andrew P. Sage Senior Design Capstone Competition at George Mason University. This conference included student papers and presentations from GMU, West Point, University of Pennsylvania, US Naval Academy, Stevens Institute of Technology, and Virginia Tech. The conference is named for Andy Sage, who was the first Dean of Engineering at GMU and a prolific writer in the field of systems engineering. The students and faculty did him proud.

But perhaps the presentation that made the greatest impact on me was the keynote by Dr. Kirk Borne. His topic was "Using Analytics to Predict and to Change the Future." He came at the problem from a "Big Data" point of view, beginning early in the presentation with a picture of the zettabytes of data produced by airline engines. A zettabyte is 1 × 10²¹ bytes of data.

I have often noted that in systems engineering, particularly in the early concept development phase, I have a sparse dataset, not a large one. In cutting-edge work, such as defense applications, we often have only basic research, where the massive data from other systems may not relate well to the new concept. However, during the presentation, I found myself writing note after note about how the same concepts apply even to smaller datasets.

Then I realized that we are already applying these kinds of techniques to Innoslate, as a result of applying natural language processing (NLP) to the information we are gathering and developing to create the system model.

For those new to NLP, Wikipedia defines it as "an area of computer science and artificial intelligence concerned with the interactions between computers and human (natural) languages, in particular how to program computers to fruitfully process large amounts of natural language data." We currently use NLP in three of Innoslate's analytical tools: the Requirements Quality Checker, Intelligence, and the Traceability Assistant. The first two have been around for a while, but the Traceability Assistant is new with version 4.0.

If you are not familiar with the Requirements Quality Checker, it automates one of the more difficult problems in requirements management: knowing when you have good requirements. The picture below shows an example. The NLP algorithm assesses six of the eight quality attributes (Clear, Complete, Consistent, Design, Traceable, and Verifiable) shown on the sidebar below, and rolls them up into a quality score.

[Figure: Requirements Management View and Quality Checker]

We use this information to identify problems with the requirements and suggest fixes. Often those fixes are simple, such as adding the punctuation mark that completes the sentence or including a key verb (e.g., "shall"). You can always override the suggestion and mark the requirement as passing the test. All such changes are recorded in the History record for that entity.
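
To make the idea concrete, here is a minimal sketch of how a few of the simpler checks can be automated and rolled up into a score. It is only an illustration of the approach, not Innoslate's actual algorithm; the specific checks and vague terms are assumptions for the example:

```python
import re

# Vague terms that would typically fail a "Clear" or "Verifiable" check
# (illustrative list, not Innoslate's).
VAGUE_TERMS = ("as appropriate", "user-friendly", "and/or", "etc.")

def quality_checks(requirement: str) -> dict:
    """Pass/fail result for a few simple, automatable heuristics."""
    text = requirement.strip()
    return {
        "ends_with_period": text.endswith("."),
        "uses_shall": re.search(r"\bshall\b", text, re.IGNORECASE) is not None,
        "avoids_vague_terms": not any(t in text.lower() for t in VAGUE_TERMS),
    }

def quality_score(requirement: str) -> float:
    """Roll the individual checks up into a single 0-to-1 score."""
    results = quality_checks(requirement)
    return sum(results.values()) / len(results)

print(quality_score("The system shall log every transaction."))  # 1.0
print(quality_score("The UI should be user-friendly"))           # 0.0
```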

Intelligence View also applies NLP technology against over 65 heuristics (i.e., rules of thumb) that represent best practices. Here the NLP comes into play by comparing the roots of words, so it quickly recognizes that "Wildfire" and "Wildfires" are potentially the same object. You can also select the "Fix" button, and a window pops up that explains the problem and helps you fix it (see right).
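
Root matching like this is classically done with a stemmer. Here is a tiny sketch of the idea using NLTK's Porter stemmer; again, an illustration of the technique rather than Innoslate's implementation:

```python
from nltk.stem import PorterStemmer  # pip install nltk

stemmer = PorterStemmer()

def same_root(a: str, b: str) -> bool:
    """True when two words reduce to the same stem."""
    return stemmer.stem(a.lower()) == stemmer.stem(b.lower())

print(same_root("Wildfire", "Wildfires"))  # True  -- likely the same object
print(same_root("Wildfire", "Waterfall"))  # False
```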

Finally, our newest application of NLP technology comes in the form of the Traceability Assistant. Innoslate's Traceability Assistant is a "dream come true" for all of us who have been working with relational databases. The real challenge has always been how to relate information between different classes of data. In fact, I was mapping two related policy documents the other day and asked my developers, "Is there some way to automate this process of tracing requirements between documents?" Then they showed me what they were working on: the Traceability Assistant. It uses NLP to read the name and description fields of every item, compare them, and estimate whether, and how well, two items match. In the example below, we can see different shades of green, where darker green indicates a higher-probability match. It is still just an algorithm, and you may not agree with its conclusions, so you must place the "X" in the box yourself; the tool shows the full name and description of the row and column entities so that you can make an informed decision. The best part is that this works for any relationship between entity classes, so we can use it for functional allocation as well as requirements traceability and all the other connecting relationships. Can you imagine the productivity increase from this?

[Figure: Traceability Assistant hierarchical comparison matrix]
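
As a rough illustration of how such match strengths can be computed, here is a common text-similarity recipe, TF-IDF cosine similarity over the combined name-and-description text. It is not necessarily the measure Innoslate uses, and the sample entities are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Name + description text for two requirements and two functions (made up).
requirements = [
    "Detect Wildfire: The system shall detect wildfires within 5 minutes.",
    "Report Location: The system shall report the location of each fire.",
]
functions = [
    "Monitor for wildfires using infrared sensors.",
    "Transmit detected fire coordinates to the operations center.",
]

vectorizer = TfidfVectorizer().fit(requirements + functions)
scores = cosine_similarity(
    vectorizer.transform(requirements), vectorizer.transform(functions)
)

# scores[i][j] is the match strength between requirement i and function j;
# the higher the value, the darker the green cell would be in the matrix.
for row in scores:
    print([round(score, 2) for score in row])
```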

Innoslate also has a Suspect Assistant: if relationships have already been created and reviewed, but changes are then made, it helps identify which entities should likely no longer be connected. Many other tools simply flag that something changed and mark all downstream information as suspect, so someone cleaning up the grammar in one entity can trigger a major review down the entire chain. What a waste of time and energy. Innoslate's Suspect Assistant instead highlights, in shades of red, the probability that traced entities should no longer be connected. It can also be used after a set of manual connections to identify where the name and description do not provide enough information to validate a connection between the entities, helping you see where you need to enhance the clarity between connected entities.

[Figure: Suspect Assistant hierarchical comparison matrix]
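
The suspect check can be sketched the same way: re-score a traced pair after an edit instead of blindly flagging everything downstream. A toy example with a simple word-overlap measure (illustrative only, with invented text):

```python
import re

def overlap(a: str, b: str) -> float:
    """Overlap coefficient between the word sets of two text fields."""
    words_a = set(re.findall(r"[a-z]+", a.lower()))
    words_b = set(re.findall(r"[a-z]+", b.lower()))
    return len(words_a & words_b) / min(len(words_a), len(words_b))

before = "The system shall detect wildfires within five minutes."
after  = "The system shall detect wildfires within two minutes."
target = "Monitor the forest for wildfires continuously."

# A cosmetic edit barely moves the score, so the trace is left alone; an
# edit that removed the shared vocabulary would drive the score toward 0
# and paint the cell red.
print(round(overlap(before, target), 2))  # 0.33
print(round(overlap(after, target), 2))   # 0.33 -- unchanged, not suspect
```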

Both of these tools are available in the traceability matrix diagram provided in Innoslate 4.0. Our commitment to our customers and our application of emerging technologies, such as LML, cloud computing, and NLP, demonstrate that Innoslate is the tool for enabling 21st Century Digital Engineering.


The Future of Systems Engineering


I attended an interesting systems engineering forum this past week. A fair number of government organizations and contractors were participants. There were many interesting observations from this forum, but one presenter from a government agency said something that particularly struck me. He was saying that one of the major challenges he faced was finding people who were trained in UML and SysML. It made me think: “Why would it be difficult to find people trained in UML? Wasn’t UML a software development standard for nearly the last 20 years? Surely it must be a major part of the software engineering curriculum in all major universities?”

The Unified Modeling Language (UML) was developed in the late 1990s-early 2000s to merge competing diagrams and notations from earlier work in object-oriented analysis and design. This language was adopted by many software development organizations in the 2000s. But as the graph below shows, search traffic for UML has substantially declined since 2004.

This trend is reinforced by a simple Google search of the question: “Why do software developers not use UML anymore?”

It turns out that the software engineering community has moved on to the next big thing: Agile. Systems engineers are now trying to morph Agile into a systems engineering methodology as well, just as they did with UML when they created its systems profile, the Systems Modeling Language (SysML).

This made me wonder, “Do Systems Engineers try to apply software engineering techniques to perform systems engineering and thereby communicate better with the software engineers?” I suddenly realized that I have been living through many of these methodology transitions from one approach to software development and systems development to another.

My first experience in modeling was using flow charts in my freshman year class on FORTRAN. FORTRAN was the computer language used mostly by the scientific community in the 1960s through the 1990s. We created models using a standard symbol set like the one below.

Before the advent of personal computers, these templates were used extensively to design and document software programs. However, as we quickly learned in our classes, it was much quicker to write, execute, and debug the code than to draw these charts by hand. Hence, we primarily used flowcharts to document the programs, not to design them.

Later in life, I used a simplified version of this notation to convert a rather large (at the time) software program from Cray computers to VAX computers. I used a rectangular box for most steps and a rectangular box with a point on the side for decision points. This simplified approach produced the same results in a much easier-to-read, more understandable way. You didn't have to worry about the nuanced notations or become an expert in them.

Later, after getting a personal computer (a Macintosh 128K) I discovered some inexpensive software engineering tools that were available for that platform. These tools were able to create Data Flow Diagrams (DFDs) and State Transition Diagrams (STDs). At that time, I had moved from being a software developer into project management (PM) and systems engineering (SE). So, I tried to apply these software engineering tools to my systems problem, but they never seemed to satisfy the needs of the SE and PM roles I was undertaking.

In 1989, I was introduced to a different methodology embodied in the RDD-100 tool. It gave me places to capture my information and (automatically) produced diagrams from that information; or I could use the diagrams to capture the information as well. All of a sudden, I had a language that really met my needs. Later, CORE applied a modified version of this language and became my tool of choice. The only problem was that no one had documented the language or gone to the effort of making it a standard, so arguments abounded throughout the community.

In subsequent years I watched systems engineers argue over functional analysis versus object-oriented methods. The UML camp pushed object-oriented tools, such as Rational Rose, while the functional analysis camp pushed tools such as CORE. We used both on a very major project for the US Army (yes, that one), and customers seemed to understand and like the CORE approach better (from my perspective). On other programs, I found a number of people using a DoD Architecture Framework (DoDAF) tool called Popkin System Architect, which was later procured by IBM (and more recently sold off to another vendor). Popkin included two types of behavioral diagrams: IDEF0s and DFDs. IDEF0 was yet another software development language adopted by systems engineers after software developers had moved on to object-oriented computer and modeling languages.

I hope you can now see the pattern: software engineers develop and use a language, which is later picked up by the systems engineering community, usually at the point where its popularity in the software world is declining. The systems engineering community eventually realizes the problems with that language and moves on. However, one language has endured: the underlying language of RDD-100 and CORE. It traces its roots back to the heyday of TRW in the 1960s. That language was invented and used on programs that went from concept development to initial operational capability (IOC) in 36 months. It was used for both systems and software development.

But the problem was, as noted above, there was no standard. A few of us recognized the problems we had encountered in trying to use these various software engineering languages and wanted to create a simpler standard for use in both SE and PM. So, in 2012 a number of us got together and developed the Lifecycle Modeling Language (LML). It was based on work SPEC Innovations had done previously, which this group validated and greatly enhanced. The committee published LML on an open website (www.lifecyclemodeling.org) so it could be used by anyone. But I knew even before the committee started that the language could not be easily enhanced without being instantiated in a software tool. So, in parallel, SPEC created Innoslate®. Innoslate (pronounced "In-no-Slate") gave the community a tool to test and refine the language and to map it to other languages, including the DoDAF MetaModel 2.0 (DM2) and, in 2014, SysML (incorporated in LML v1.1). Hence, LML provides a robust ontology for SysML (and UML) today. But it goes far beyond SysML: Innoslate has proven that many different diagram types (over 27 today) can be generated from the ontology, including IDEF0, N2, and many other forms of physical and behavioral models.

Someone else at the SE forum said something insightful as well. They described SysML as the language of today and said that "10 years from now there may be something different." That future language can and should be LML. To quote George Allen (the late coach of the LA Rams and Washington Redskins): "The Future is Now!"


Innoslate 4.0 Performance Benchmark

Soon we will announce a major advancement for the systems engineering community: Innoslate 4.0. Before we announce the exciting new features Innoslate 4.0 has to offer, we wanted to give a sneak peek at its performance improvements. Many common operations are up to twice as fast as in Innoslate 3.9, and large operations average a 1,500% performance improvement.

All tests below were performed on a desktop with an Intel Core i5-6500 CPU at 3.20 GHz (4 cores), a 256 GB SSD, and 16 GB of physical memory (RAM). The desktop ran the Windows 10 operating system, Microsoft SQL Server 2016, and Innoslate Enterprise 4.0. The same physical machine was used to generate the results for each Innoslate version.

Database Operations

| Entities | Cold Start, 3.9 (s) | Cold Start, 4.0 (s) | Save, 3.9 (s) | Save, 4.0 (s) | Deletion, 3.9 (s) | Deletion, 4.0 (s) |
|---|---|---|---|---|---|---|
| 10,000 | 0.62 | 0.04 | 0.02 | 0.02 | 0.27 | 0.13 |
| 100,000 | 0.76 | 0.05 | 0.03 | 0.02 | 0.28 | 0.16 |
| 500,000 | 0.77 | 0.06 | 0.03 | 0.02 | 0.30 | 0.17 |
| 1,000,000 | 0.87 | 0.08 | 0.04 | 0.03 | 0.32 | 0.18 |
| 2,000,000 | 1.13 | 0.12 | 0.04 | 0.04 | 0.39 | 0.19 |
| 10,000,000 | 6.24 | 0.58 | 0.11 | 0.06 | 1.32 | 0.71 |

As indicated in the table above, Innoslate 4.0 is up to 975% faster than Innoslate 3.9 at cold start (time to first interaction) and approximately 83% faster at typical database operations when operating at scale.
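
For readers checking the arithmetic, "X% faster" throughout this post means (old time / new time - 1) * 100. A two-line sketch using the 10,000,000-entity row:

```python
def pct_faster(old_s: float, new_s: float) -> float:
    """Percent improvement: how much faster the new time is than the old."""
    return (old_s / new_s - 1) * 100

print(round(pct_faster(6.24, 0.58)))  # 976 -> the "up to 975% faster" cold start
print(round(pct_faster(0.11, 0.06)))  # 83  -> the "~83% faster" save at scale
```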

User Interfaces

| Entities | Database View, 3.9 (s) | Database View, 4.0 (s) | Documents View, 3.9 (s) | Documents View, 4.0 (s) | Diagrams View, 3.9 (s) | Diagrams View, 4.0 (s) |
|---|---|---|---|---|---|---|
| 10,000 | 0.25 | 0.18 | 0.05 | 0.04 | 0.05 | 0.03 |
| 100,000 | 0.28 | 0.21 | 0.06 | 0.04 | 0.05 | 0.03 |
| 500,000 | 0.28 | 0.27 | 0.08 | 0.06 | 0.07 | 0.05 |
| 1,000,000 | 0.29 | 0.28 | 0.11 | 0.08 | 0.09 | 0.06 |
| 2,000,000 | 0.39 | 0.31 | 0.12 | 0.09 | 0.10 | 0.08 |
| 10,000,000 | 1.20 | 0.93 | 0.88 | 0.42 | 0.95 | 0.40 |

As indicated in the table above, Innoslate 4.0 is up to 137% faster than Innoslate 3.9 at typical user interface operations when operating at scale.

Search

| Entities | Simple Search, 3.9 (s) | Simple Search, 4.0 (s) | Complex Search, 3.9 (s) | Complex Search, 4.0 (s) |
|---|---|---|---|---|
| 10,000 | 0.23 | 0.12 | 0.25 | 0.13 |
| 100,000 | 0.26 | 0.13 | 0.25 | 0.20 |
| 500,000 | 0.26 | 0.15 | 0.50 | 0.21 |
| 1,000,000 | 0.27 | 0.16 | 0.53 | 0.26 |
| 2,000,000 | 0.38 | 0.21 | 0.66 | 0.32 |
| 10,000,000 | 1.29 | 0.34 | 1.69 | 0.51 |

As indicated in the table above, Innoslate 4.0 is up to 279% faster than Innoslate 3.9 at simple search queries and over 231% faster at complex search queries.

Reports

| Entities in Report | Basic Tabular, 3.9 (s) | Basic Tabular, 4.0 (s) | Speedup |
|---|---|---|---|
| 10,000 | 72.32 | 3.05 | 24x Faster |
| 50,000 | 102.93 | 5.82 | 18x Faster |
| 100,000 | 247.05 | 14.91 | 17x Faster |

As indicated in the table above, Innoslate 4.0 is up to 1,557% faster than Innoslate 3.9 at large report operations. Most complex reports have been migrated to backend services to decrease report generation time.

Conclusion

Innoslate 4.0 brings industry-leading performance to the systems engineering community, with proven scalability tested to 10 million entities in a single project on commodity hardware. Innoslate Enterprise can scale further with additional RAM and CPU resources. Additionally, Innoslate supports clustering and can elastically scale across multiple application server nodes.

Why MBSE Still Needs Documents

A lot of people are pushing Model-Based Systems Engineering (MBSE) as a way to deliver just models ... and by models they mean drawings. The drawings can and should meet the criteria of the standards, be it SysML, BPMN, or IDEF. But ultimately, as systems engineers, we are on the hook to deliver documents. These documents (specifications) form the basis for contracts and thus have significant legal ramifications. If the specifier uses a language that not everyone understands and supplies only drawings in the delivered model, confusion will reign supreme. Even worse, if the tool does not enforce the standards and allows users to put anything on the diagram, then all bets are off. You can imagine how the lawyers salivate over this kind of situation.

But it's even worse really, because not only are diagram standards routinely ignored, but so are other best practices, such as giving every entity in the database a unique number and a description. As simple as this sounds, most people put off these basics until later, if ever. This leads us to our first question: 1) Is a model a better method to specify a system?

This question requires us to look at the underlying assumption behind delivering models vs. documents: that a model better communicates the complete thinking behind the design, so that the specification is easier to understand and execute. Which leads us to the next question: 2) Can a document provide the same thing?

Not if we use standard office software to produce the document. The way it is commonly done today, someone writes up a document in a tool like MS Word, that file is shipped around for everyone to comment on (using track changes, naturally), and all the comments are adjudicated in a "Comment Matrix." Once the document is completed, someone converts it to PDF (a simple "Save as ..." in MS Word). In the worst case, someone prints the document and scans it into a PDF. Now we have lost all traceability, or even the ability to hyperlink portions of the information to other parts of the design, making requirements traceability very difficult.

However, if you author your document in a tool like Innoslate, you can use its Documents View to create the document as entities in the database. You can link the individual entities, using the built-in or user-created relationships, to other database entities, such as the models in the Action Diagram or the Test Cases. This provides traceability to both the document and the models. In fact, Innoslate's diagrams can be embedded in the document as well, keeping it live and reducing the configuration management problem inherent in the standard approach.
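
Conceptually, the document itself becomes data. Here is a minimal sketch of that idea; the Entity structure and its field names are hypothetical illustrations, not Innoslate's actual schema or API:

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """One database entity: a document section, requirement, action, etc."""
    number: str
    name: str
    description: str
    relationships: dict = field(default_factory=dict)  # label -> target numbers

req = Entity("REQ-42", "Detect Wildfire",
             "The system shall detect wildfires within 5 minutes.")
act = Entity("ACT-7", "Monitor Sensors",
             "Poll the infrared sensors and raise an alert on detection.")

# A trace is just a relationship between entities, so the document text
# (the requirement) and the model (the action) stay linked.
req.relationships.setdefault("traced to", []).append(act.number)
print(req.relationships)  # {'traced to': ['ACT-7']}
```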

MBSE doesn't mean the end of documents; it means using models to analyze data and create more informative documents. Using a tool like Innoslate gives you the best of both worlds: documents and models in one complete, integrated package.

Professional Development Event: Model-Based Systems Engineering

If you are in the Washington, DC area April 24-26, 2018, don't miss the Professional Development Event Model-Based Systems Engineering training course from TSTI.

This course is intended for practicing systems engineers, payload principal investigators, subsystem engineers, or project managers involved in any phase of the space mission life cycle who are curious about applying MBSE to their projects. Some basic understanding of systems engineering principles and processes is assumed.

The course is organized in a unique, modular format allowing you to choose the depth of training appropriate to your interest and time available. Sign up for one, two or all three days.

Want an overview of MBSE? Day 1: Builds a foundation for understanding why MBSE is useful and its overall value proposition for your projects.
Want to be an MBSE user? Day 2: Builds on day one and provides a deeper understanding for potential users of MBSE to explore what types of products and artifacts can be generated and what they can be used for in your projects.
Want to build system models? Day 3: Builds on days 1 and 2 and dives deeper into the details of MBSE languages and tools and challenges participants to build their own models from scratch. While the course uses a specific tool for teaching, the goal of the course is to be “tool agnostic” such that the basic principles can be applied to any tools that a person or project may use.

For details and registration, visit https://www.tsti.net/mbse/ or read the informational brochure (MBSE Virginia Course Flyer).

When: Tuesday, April 24 – Thursday, April 26, 9 a.m. – 5 p.m.
Where: Marriott Courtyard Dulles Airport, 3935 Centerview Drive, Chantilly, VA 20151
Cost: 3 Days – $1,600, 2 Days – $1,290, 1 Day – $980

It’s Time for Government to Embrace the Cloud Computing Revolution

We are sometimes our own worst enemies! We want something, but at the same time we put up barriers to obtaining it. A perfect example occurred at an Industry Day I recently attended. The customer had put out a request for information (RFI) and was holding a day to brief potential contractors on the program. No procurement was discussed, only information about how they wanted to implement model-based systems engineering (MBSE). In particular, they wanted to know what kind of contracting language would produce better requests for proposals (RFPs). However, they also said that we could not have one-on-one technical conversations with the government technical personnel. I call that a "self-inflicted denial-of-service attack."

Cloud computing is the most common self-inflicted denial of service we encounter. We are all familiar now with DNS (Domain Name System) attacks. They seem to be a frequent occurrence, and it's frustrating when we can't get to our favorite website because a troll has attacked it.

Because of these trolls and all their attack vectors, many in government have resisted adopting cloud computing. They think: “clouds are dangerous … I don’t have control over my data … someone might steal it.” All the while, their corporate networks have been hacked by every major player in the world. If someone hacks into your corporate network, everything they get is related to your organization and what it does. In other words, everything they get is gold. But isn’t cloud computing, as provided by large providers like Amazon, Google, and Microsoft, more secure than your corporate networks?

Let's take Google as an example. First, they don't tell anyone the locations of their data centers. They provide complete physical security. They build all their own servers from scratch and destroy them at the end of their useful life. They have all the firewalls and software detection capabilities needed, and more. They encrypt the data at rest (and you should be sending encrypted data via HTTPS, at the least). They randomize the filenames, so you need a map to find anything. They meet and exceed the FedRAMP requirements.

Does your corporate (or government) network do all that? Probably not. An Amazon Web Services representative explained it to me this way: "FedRAMP requires over 200 security controls; we have over 2,000." The last thing anyone from these major "public" cloud providers wants is some hacker successfully penetrating their network and capturing critical user data. They could (and would) be sued.

I was talking with a gentleman from the government about cloud computing the other day, and he told me, "No one has ever told me how they can clean up a spill on the cloud." [For those not in the know, a "spill" is when you accidentally put information somewhere it doesn't belong.] I did not have the presence of mind at the time, but I should have asked, "What do you do now with your enterprise e-mail system?" I can guarantee they do not go around tracking down backups and destroying hard drives. Deleting the data results in it being overwritten hundreds of times in a matter of minutes.

So, it's time to stop committing denial-of-service attacks on ourselves. It's time to embrace the cloud computing revolution and get on board. The commercial world, for the most part, did this half a decade ago. If we want to speed up and improve government, we need to figure out how to use the cloud now.

How to Choose the Right MBSE Tool

Find the Model-Based Systems Engineering Tool for Your Team

A model-based systems engineering tool can provide you with accuracy and efficiency. You need a tool that can help you do your job faster, better, and cheaper. Whether you are using legacy tools like Microsoft Office or are looking for an MBSE tool that better fits your team, here are some features and capabilities you should consider.

Collaboration and Version Control

It's 2018. The MBSE tool you are looking at should definitely have built-in collaboration and version control. You want to be able to communicate quickly and effectively with your team members and customers. Simple features such as a chat system and comment fields are a great start. Workflow and version control are more complex features, but very effective. Workflow is a great feature for a program manager: it allows the PM to design a process workflow for the team that sends out reminders and approvals. Version control lets users work together simultaneously on the same document, diagram, etc. If you are working in a team of two or more people, you need a tool with version control; otherwise you will waste a lot of time waiting for a team member to finish a document or diagram before you can work on it.

Built-in Modeling Languages Such as LML, SysML, BPMN, Etc.

Most systems engineers need to be able to create uniform models. LML encompasses the necessary aspects of both SysML and BPMN. If you would like to try a simpler modeling language for complex systems, LML is a great way to do that. A built-in modeling language lets you make your models correct and understandable to all stakeholders.

Executable Models

An MBSE tool needs to be much more than just a drag-and-drop drawing tool; the models need to be executable. Executable models ensure accurate processes through simulation. Innoslate's activity diagram and action diagram are both executable through the discrete event and Monte Carlo simulators. With the discrete event simulator, you will not only see your process models execute, but also the total time, costs, resources used, and slack. The Monte Carlo simulator will show you the standard deviation of your model's time, cost, and resources.
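
To give a feel for what "executable" means, the toy sketch below runs a three-action serial process many times and reports the mean and standard deviation of the total time. It illustrates the general discrete event and Monte Carlo idea, not Innoslate's simulators; every number in it is made up:

```python
import random
import statistics

# (action name, mean duration in hours, standard deviation) -- invented data.
actions = [
    ("Receive order",    1.0, 0.2),
    ("Assemble product", 4.0, 1.0),
    ("Ship product",     2.0, 0.5),
]

def run_once() -> float:
    """One discrete-event pass: sample a duration for each serial action."""
    return sum(max(0.0, random.gauss(mean, sd)) for _, mean, sd in actions)

runs = [run_once() for _ in range(10_000)]  # the Monte Carlo part
print(f"mean total time: {statistics.mean(runs):.2f} hours")
print(f"std deviation:   {statistics.stdev(runs):.2f} hours")
```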

Easy to Learn

It can take a lot of time and money to learn a new MBSE tool, so you want a relatively short learning curve. First, look for a tool that has an easy user interface. A free trial, sandbox, or starter account is a major plus; it lets you get a good feel for how easy the tool is to learn. Look for tools that provide free online training. It's important that the tool provider is dedicated to educating its users: they should have documentation, webinars, and free or included support.

Communicates Across Stakeholders

Communication in the system/product lifecycle is imperative. Most of us work on very diverse teams. Some of us have backgrounds in electrical engineering or physics or maybe even business. You need to be able to communicate across the entire lifecycle. This means the tool should have classes that meet the needs of many different backgrounds, such as risk, cost, decisions, assets, etc. A tool that systems engineers, program managers, and customers can all understand is ideal. The Lifecycle Modeling Language (LML) is a modeling language designed to meet all the stakeholder needs.

Full Lifecycle Capability

A tool with full lifecycle capability will save you money and time. If you don't choose a tool with all the features needed for the project's lifecycle, you will have to purchase several different tools, each of which can cost as much as a single full lifecycle MBSE tool. You will also have to spend more on training, since you will not be able to train in one large group. Most tools do not work together, so you will have to spend resources on integrating them. All of this makes the overall project cost a lot more, which is why Innoslate is a full lifecycle MBSE solution.


It's important to find the tool that is right for your project and your team. These are just helpful guidelines; you might need to adjust some of them for your specific project. If you would like to see whether Innoslate is the right tool for your project, get started with it today or call us to see if our solution is a good fit for you.


Why Do We Need Model-Based Systems Engineering?

MBSE is one of the latest buzzwords to hit the development community.

The main idea was to transform the systems engineering approach from “document-centric” to “model-centric.” Hence, the systems engineer would develop models of the system instead of documents.

But why? What does that buy us? Switching to a model-based approach helps: 1) coordinate system design activities; 2) satisfy stakeholder requirements; and 3) provide a significant return on investment.

Coordinating System Design Activities

The job of a systems engineer is in part to lead the system design and development by working with the various design disciplines to optimize the design in terms of cost, schedule, and performance. The problem with letting each discipline design the system without coordination is shown in the comic.

If each discipline optimized for their area of expertise, then the airplane (in this case) would never get off the ground. The systems engineer works with each discipline and balances the needs in each area.

MBSE can help this coordination by providing a way to capture all the information from the different disciplines and share it with the designers and other stakeholders. Modern MBSE tools, like Innoslate, provide the means for this sharing, as long as the tool is easy for everyone to use. A good MBSE tool will have an open ontology, such as the Lifecycle Modeling Language (LML); many ways to visualize the information in different interactive diagrams (models); the ability to verify that logic and modeling rules are being met; and traceability between all the information from all sources.

Satisfying Stakeholder Requirements

Another part of the systems engineers’ job is to work with the customers and end-users who are paying for the product. They have “operational requirements” that must be satisfied so that they can meet their business needs. Otherwise they will no longer have a business.

We use MBSE tools to help us analyze those requirements and manage them to ensure they are met at the end of the product development. As such, the systems engineer becomes the translator from the electrical engineers to the mechanical engineers to the computer scientists to the operator of the system to the maintainer of the system to the buyer of the system. Each speaks a different language. The idea of using models was to provide this communication in a simple, graphical form.

We need to recognize that many types of systems engineering diagrams (models) do not communicate to everyone, particularly the stakeholders. That's why documents contain both words and pictures: they not only present the visual but explain it to those who do not understand it. We need an ontology and a few diagrams that seem familiar to almost anyone. In short, we need something that can model the system and communicate well with everyone.

Perhaps the most important thing about this combined functional and physical model is that it can be tested to ensure it works. Using discrete event simulation, the model can be executed to create timelines and to identify resource usage and cost. In other words, it allows us to optimize the cost, schedule, and performance of the system through the model. Finally, we have something that helps us do our primary job. Now that's model-based systems engineering!

Provides a Significant Return on Investment

We can understand the idea of how systems engineering provides a return on investment from the graph.

The picture shows what happens when we do not spend enough time and money on systems engineering. The result is often cost overruns, schedule slips, reduced performance, and program cancellations. Something not shown on the graph, since it is NASA-related data for unmanned satellites, is the potential loss of life due to poor systems engineering.

MBSE tools help automate the systems engineering process by providing a mechanism not only to capture the necessary information more completely and traceably, but also to verify that the models work. If those tools contain simulators that execute the models and, from that execution, provide a means to optimize cost, schedule, and performance, then fewer errors will be introduced in the early, requirements development phase. Eliminating those errors prevents the cost overruns and problems that might not surface under traditional document-centric approaches.

Another cost reduction comes from conducting model-based reviews (MBRs). An MBR uses the information within the tool to show reviewers what they need to ensure that the review evaluation criteria are met. The MBSE tool can provide a roadmap for the review, using internal document views and links, and provide commenting capabilities so that reviewers' questions can be posted. The developers can then use the tool to answer those comments directly. By not having to print copies of the documentation for everyone and then consolidate the markups into a document for adjudication, we cut out several time-consuming steps, reducing the labor cost of the review by an order of magnitude. This MBR approach can cut the time to review and respond from weeks to days.

Bottom-line

The purpose of "model-based" systems engineering was to move away from being "document-centric." But MBSE is much more than a buzzword: it's an approach that allows us to develop, analyze, and test complex systems. Most importantly, we need MBSE because it provides a means to coordinate system design activities, satisfy stakeholder requirements, and provide a significant return on investment. The "model-based" technique is only as good as the MBSE tool you use, so make sure to choose a good one.

Innoslate Cloud vs. Innoslate Enterprise

This article is meant to help you determine which of these solutions is best for you and your team.


Innoslate is offered as a cloud solution or an on-premises solution: Innoslate Cloud and Innoslate Enterprise, respectively. It can be difficult to decide which one is right for you, so we'll take you through the pros and cons of each to help you make this decision.


Innoslate Cloud

Innoslate Cloud is flexible and affordable. You can get started quickly, since there's no download required. It's easy to share projects with reviewers who are not yet Innoslate users. The pricing plans can be monthly or yearly, so you only pay for what you need. Innoslate Cloud is hosted in data centers audited for ISO 27001 and SAS 70, so security isn't a problem.


Pros:

Scalability through the cloud

High availability

Flexible pricing plans

Affordable pricing

No download required


Cons:

No floating licenses offered

Fewer administration controls

Fewer options


Innoslate Enterprise

Innoslate Enterprise offers scalability and collaboration beyond the cloud and behind your firewall, with the heightened security of your own server. Innoslate Enterprise is available on unclassified (U/FOUO) and classified (SECRET) level government networks through NSERC. It is also available on the Amazon AWS Marketplace. You can choose between floating and named licensing options. You can get started within seconds on modern server hardware, and the high performance and close proximity of Innoslate Enterprise remove latencies. The application server, Red Hat's WildFly, implements the latest enterprise Java standards with full Java EE 7 support. WildFly runs on Windows, Mac, and Linux servers.


Pros:

Massive scalability (10 million entities)

Increased administration control

More availability through NSERC and AWS

Floating licensing options available

Fewer latencies


Cons:

Less flexibility through pricing

More expensive than Innoslate Cloud

Download required


Innoslate Enterprise is perfect if you need the software hosted on your organization's own server or want more administrative control through LDAP. Some organizations will have to choose Innoslate Enterprise due to strict security requirements. Check with your IT department to see if you are required to use an on-premises solution.


Still not sure which solution is right for you? Give us a call at 571.485.7800.

Difference in Floating vs. Named – What’s Right For Me?

Innoslate Enterprise has two licensing options: Floating and Named. On a daily basis, we hear people ask us, "What's the difference, and which one is best for my organization?" Each has different benefits, and it's important to understand what each one is in order to pick the best licensing type for your organization.

Floating Licensing – “a software licensing approach in which a limited number of licenses for a software application are shared among a larger number of users over time.”


We like to use the analogy of a family computer vs. a cell phone. A floating license is most similar to a family computer. More and more people have individual laptops and iPads, but not too long ago most families shared one computer. Each family member had their own account, which let them log in with their own username and password and save their own background, screensaver, files, and other preferences. In other words, it looked and felt just like their own computer, without the family having to buy each member a laptop. However, only one family member could use the computer at a time.


One floating license is just like that family computer. Multiple people can have a login and use Innoslate as if it were their own license, but only one at a time. This is a really great option if you have a large team of which maybe only a quarter will be using the tool at any given time. For instance, if you have 100 engineers, but only 25 need to use a license at a time, rather than buying 100 named licenses you could buy 25 floating licenses and save a lot of money.
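
For the programmatically inclined, a floating pool behaves like a counting semaphore. A toy sketch (not how Innoslate actually implements licensing; the numbers echo the example above):

```python
import threading

floating_pool = threading.BoundedSemaphore(25)  # 25 purchased licenses

def work_session(engineer: str) -> None:
    with floating_pool:      # checks one license out of the shared pool
        pass                 # ... the engineer uses the tool here ...
    # on exit, the license returns to the pool for the next engineer

threads = [
    threading.Thread(target=work_session, args=(f"engineer-{i}",))
    for i in range(100)      # 100 engineers, never more than 25 at once
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```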


Floating licenses are also a great option if you have a lot of employees joining or leaving a contract frequently. You don't have to worry about whose name is associated with the license; you can easily remove and add employees on a floating license.


Named Licensing – “an exclusive licensure of rights assigned to a single named software user. The user will be named in the license agreement.”


Back to the family computer vs. cell phone analogy: named licensing is like a cell phone. Cell phones are not designed for sharing. There aren't multiple logins, and the saved preferences, downloads, and background do not change based on which family member is using the phone. They are meant for one person to use, always. Named licensing is exactly the same: each named license is associated with one name.


A named license is cheaper than a floating one. You can also have free read-only users. For example, say you purchased one named license. You can then invite as many people as you want to review your project. They will be able to see the project you shared with them, leave comments, and chat. With a floating license, by contrast, a read-only user consumes a license for the duration of their session, until they sign out.


If you know who will be using each named license and they will be using it daily, then named licenses are right for you. Named licensing can also be the right option if you have a very large number of reviewers using the software daily.

When making the decision, floating or named, make sure to think about the following:

  • How many concurrent users do we have?
  • Do we have a lot of employee changeover?
  • How many total engineers/requirements managers will use the software daily?
  • How many reviewers will be using the software daily?
  • What is our budget?

Still not sure? Talk to us. We’d be happy to help you decide which option is best for you.

Read next: Innoslate Enterprise vs. Innoslate Cloud – What Is Right For Me?