The Future of Systems Engineering


I attended an interesting systems engineering forum this past week. A fair number of government organizations and contractors were participants. There were many interesting observations from this forum, but one presenter from a government agency said something that particularly struck me. He said that one of the major challenges he faced was finding people trained in UML and SysML. It made me think: “Why would it be difficult to find people trained in UML? Hasn’t UML been a software development standard for nearly 20 years? Surely it must be a major part of the software engineering curriculum in all major universities?”

The Unified Modeling Language (UML) was developed in the late 1990s and early 2000s to merge the competing diagrams and notations from earlier work in object-oriented analysis and design. Many software development organizations adopted UML in the 2000s. But as the graph below shows, search traffic for UML has declined substantially since 2004.

This trend is reinforced by a simple Google search of the question: “Why do software developers not use UML anymore?”

It turns out that the software engineering community has moved on to the next big thing: Agile. Systems engineers are now trying to morph Agile into a systems engineering methodology as well, just as they did when they created the UML profile known as the Systems Modeling Language (SysML).

This made me wonder, “Do systems engineers try to apply software engineering techniques to perform systems engineering and thereby communicate better with the software engineers?” I suddenly realized that I have lived through many of these methodology transitions, from one approach to software and systems development to another.

My first experience in modeling was using flowcharts in my freshman-year class on FORTRAN. FORTRAN was the computer language used mostly by the scientific community from the 1960s through the 1990s. We created models using a standard symbol set like the one below.

Before the advent of personal computers, these templates were used extensively to design and document software programs. However, as we quickly learned in our classes, it was much quicker to write-execute-debug the code than it was to draw these charts by hand. Hence, we primarily used flowcharts to document, not design, the programs.

Later in life, I used a simplified version of this approach to convert a rather large (at the time) software program from Cray computers to VAX computers. I used a rectangular box for most steps and a rectangular box with a point on the side to represent decision points. This simplified approach provided the same results in a form that was much easier to read and understand. You didn’t have to worry about the nuanced notations or become an expert on them.

Later, after getting a personal computer (a Macintosh 128K), I discovered some inexpensive software engineering tools that were available for that platform. These tools could create Data Flow Diagrams (DFDs) and State Transition Diagrams (STDs). At that time, I had moved from being a software developer into project management (PM) and systems engineering (SE). So, I tried to apply these software engineering tools to my systems problems, but they never seemed to satisfy the needs of the SE and PM roles I was undertaking.

In 1989, I was introduced to a different methodology in the RDD-100 tool. It contained the places to capture my information and (automatically) produced diagrams from that information. Or I could use the diagrams to capture the information as well. All of a sudden, I had a language that really met my needs. Later, CORE applied a modified version of this language and became my tool of choice. The only problem was that no one had documented the language or gone to the effort of making it a standard, so arguments abounded throughout the community.

In subsequent years, I watched systems engineers argue over functional analysis versus object-oriented methods. The UML camp was pushing object-oriented tools, such as Rational Rose, while the functional analysis camp pushed tools such as CORE. We used both on a very major project for the US Army (yes, that one), and customers seemed to understand and like the CORE approach better (from my perspective). On other programs, I found a number of people using a DoD Architecture Framework (DoDAF) tool called Popkin System Architect, which was later acquired by IBM (and more recently sold off to another vendor). Popkin included two types of behavioral diagrams: IDEF0s and DFDs. IDEF0 was another software development language adopted by systems engineers while software developers had moved on to object-oriented computer and modeling languages.

I hope you can now see the pattern: software engineers develop and use a language, which is later picked up by the systems engineering community, usually at a point where its popularity in the software world is declining. The systems engineering community eventually realizes the problems with that language and moves on. However, one language has endured: the underlying language represented in RDD-100 and CORE. It can trace its roots back to the heyday of TRW in the 1960s. That language was invented and used on programs that went from concept development to initial operational capability (IOC) in 36 months. It was used for both systems and software development.

But the problem was, as noted above, there was no standard. A few of us realized the problems we had encountered in trying to use these various software engineering languages and wanted to create a simpler standard for use in both SE and PM. So, in 2012 a number of us got together and developed the Lifecycle Modeling Language (LML). It was based on work SPEC Innovations had done previously, and this group validated and greatly enhanced the language. The committee “published” LML on an open website (www.lifecyclemodeling.org) so it could be used by anyone. But I knew even before the committee started that the language could not be easily enhanced without being instantiated in a software tool. So, in parallel, SPEC created Innoslate®. Innoslate (pronounced “In-no-Slate”) provided the community with a tool to test and refine the language and to map it to other languages, including the DoDAF MetaModel 2.0 (DM2) and, in 2014, SysML (incorporated in LML v1.1). Hence, LML provides a robust ontology for SysML (and UML) today. But it goes far beyond SysML. Innoslate has proven that many different diagram types (over 27 today) can be generated from the ontology, including IDEF0, N2, and many other forms of physical and behavioral models.

Someone else at the SE forum I attended last week also said something insightful. They described SysML as the language of today and noted that “10 years from now there may be something different.” That future language can and should be LML. To quote George Allen (the long-departed coach of the LA Rams and Washington Redskins): “The Future is Now!”


Quick Guide to Innoslate’s Ontology

Innoslate uses the Lifecycle Modeling Language (LML) ontology as the basis for the tool’s database schema. For those new to the word “ontology,” it’s simply the set of classes, and the relationships between them, that forms the basis for capturing the information needed. We look at this in a simple Entity-Relationship-Attribute (ERA) form. This formulation has a simple parallel to the way we look at most languages: entities represent nouns; relationships represent verbs; attributes on entities represent adjectives; and attributes on relationships represent adverbs.
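To make the ERA form concrete, here is a minimal sketch in Python. The class and attribute names are purely illustrative assumptions (they are not Innoslate’s API or the LML schema); the point is only how nouns, verbs, adjectives, and adverbs map onto entities, relationships, and their attributes.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """A noun, e.g., an Action or an Asset."""
    name: str
    attributes: dict = field(default_factory=dict)  # adjectives describing the entity

@dataclass
class Relationship:
    """A verb connecting two entities, e.g., 'performs'."""
    verb: str
    source: Entity
    target: Entity
    attributes: dict = field(default_factory=dict)  # adverbs describing the relationship

# "The radar (noun) continuously (adverb) performs (verb) the scan (noun)."
radar = Entity("Radar", {"type": "Asset"})
scan = Entity("Perform Scan", {"type": "Action"})
link = Relationship("performs", radar, scan, {"frequency": "continuously"})
```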

LML contains twelve (12) entity classes and eight (8) subclasses. They represent the basic elements of information needed to describe almost any system. The figure below shows how they can be grouped to create the models needed for this description.

Most of these entity classes have various ways to visualize the information, commonly called models or diagrams. The benefit of producing the visualizations from this ontology is that when you create one model, other models that use the same information automatically have that information available.

All these entities are linked to one another through the relationships. The primary relationships are shown below.


This language takes a little getting used to, like any other language. For example, you might be used to referring to something functional as a Function or an Activity. These are both “types” of Actions in LML and are implemented as labels in Innoslate. Similarly, you may be used to different relationship names for parents and children in different entity classes. By using the same verbs for the parent-child relationships across all classes, LML avoids the confusion of having to remember all the different verbs.

You still might need other ontological additions. LML was meant to be the “80% solution.” You should look very closely at the ontology, as often you only need to add types (labels) or an attribute here and there. Hopefully, you will rarely need to add new classes and relationships. If you do add new classes, try to do so as subclasses of existing ones, so that you inherit the diagrams as well. For example, when the Innoslate development team added the new Test Center, they decided they needed to extend the Action class. This enabled the new TestCase class to inherit the Action class’s functional diagrams, as well as the status, duration, and other attributes that were important.
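Here is a hedged sketch, in Python, of what that kind of subclass extension looks like conceptually. The names and attributes below are illustrative, not Innoslate’s actual schema (which you would extend in its Schema Editor), but they show why subclassing buys you the inherited diagrams and attributes for free.

```python
class Action:
    """Base functional entity class: has status, duration, and functional views."""
    def __init__(self, name: str, duration_hours: float = 0.0, status: str = "Planned"):
        self.name = name
        self.duration_hours = duration_hours
        self.status = status

    def available_diagrams(self) -> list:
        # Views any Action supports; subclasses inherit these automatically.
        return ["Action Diagram", "IDEF0", "N2"]


class TestCase(Action):
    """Subclass of Action: adds test-specific data, inherits everything else."""
    def __init__(self, name: str, expected_result: str = "", **kwargs):
        super().__init__(name, **kwargs)
        self.expected_result = expected_result


test = TestCase("Verify login", expected_result="User authenticated", duration_hours=0.5)
print(test.status, test.available_diagrams())  # inherited attribute and inherited views
```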

Hopefully, you can see the benefits of using LML as the basis for Innoslate’s schema. It was designed to be:

  • Broad (covers the entire lifecycle – technical and programmatic)
  • Ontology-based (enables translation from LML to other languages and back)
  • As capable as SysML (with LML v1.1 extensions) and DoDAF
  • Simple in structure
  • Useful for stakeholders across the entire lifecycle

For more information, see www.lifecyclemodeling.org and visit the Help Center at help.innoslate.com.

Innoslate’s Ontology Webinar

Live Webinar July 6th at 2:30 pm EST

Everyone talks about “data-centricity,” but what does that mean in practical terms? It means that you have to have a well-defined ontology that can capture the information needed to describe the architecture or system you work with or want to create. An ontology is simply the taxonomy of entity classes (bins of information) and how those classes relate to each other.

You’ll learn a relatively new ontology, the Lifecycle Modeling Language (LML). LML provides the basis for Innoslate’s database schema. In this webinar, we will discuss each entity class and why it was developed. Dr. Steven Dam, who is the Secretary of the LML Steering Committee, will present the details of the language and how it relates to other ontologies and languages, such as the DoDAF MetaModel 2.0 and SysML. He will also discuss ways to visualize this information to enhance understanding, and how to use that information to make decisions about the architecture or system.

Join us live on July 6th at 2:30 pm EST.

After July 6th, 2017, watch the recording here.

Do Frameworks Really Work?

Over the past 20 years, we have been using various “Frameworks” to capture information about architectures and systems, beginning perhaps with the Zachman Framework, which is still one of the most successful. The Zachman Framework is timeless for this reason: it forced us to think about the standard interrogatives (Who, What, When, Why, Where, and How) and different perspectives (Executive, Business Management, Architect, Engineer, and Technician) as applied to the enterprise. The idea was to build models in each area, starting with simple lists and working up to detailed models of the systems at the component level.

Other frameworks followed, including the DoD Architecture Framework (DoDAF, which came from the C4ISR Architecture Framework), the Ministry of Defence Architecture Framework (MODAF), the NATO Architecture Framework, and many others. All these Frameworks were built on the idea that the information could be easily “binned” into these boxes; but as we know, many of the models in these Frameworks include maps between different data elements, such as DoDAF’s CV-6, which maps Capabilities to Operational Activities. Recently, the Object Management Group (OMG) and others have been pushing a Unified Architecture Framework (UAF) that combines the models from many of these other frameworks into a single, very large framework. OMG is also responsible for developing the Unified Modeling Language (UML), the Systems Modeling Language (SysML), and the Unified Profile for DoDAF and MODAF (UPDM).

All of these frameworks and languages are ways to capture information about a system or enterprise and show it as a set of data, often as a picture or model containing two or more pieces of information. An example is the DoDAF OV-5a, a functional hierarchy of Operational Activities, which contains only a single entity class and the decomposition relationship between those entities. Another example is the OV-5b, which shows the Operational Activities and the “Resource Flows” (which many of us recognize as the inputs and outputs to/from the Operational Activities) between them. Thus, we now have two entity classes and the associated relationships between them. Obviously, the information in the CV-6, OV-5a, and OV-5b overlaps, in that the same Operational Activities need to show up in each of these models. But how many of these different models would we need to completely describe a system or enterprise?
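To illustrate that overlap, here is a small Python sketch of how both the OV-5a and OV-5b can be generated as views over one shared set of Operational Activities, rather than maintained as separate drawings. The data and function names are hypothetical, purely for illustration.

```python
# One shared store of Operational Activities and the resource flows between them.
activities = {
    "A1": {"name": "Plan Mission", "parent": None},
    "A2": {"name": "Collect Data", "parent": "A1"},
}
resource_flows = [
    {"from": "A1", "to": "A2", "resource": "Tasking Order"},
]

def ov5a_view(acts):
    """OV-5a: one entity class plus the decomposition relationship."""
    return [(a["name"], acts[a["parent"]]["name"] if a["parent"] else None)
            for a in acts.values()]

def ov5b_view(acts, flows):
    """OV-5b: the same activities plus the resource flows between them."""
    return [(acts[f["from"]]["name"], f["resource"], acts[f["to"]]["name"])
            for f in flows]

# The same Operational Activities appear in both views automatically.
print(ov5a_view(activities))
print(ov5b_view(activities, resource_flows))
```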

An alternative way to capture information is to use an ontology (the DoDAF 2.02 is based on a rather large one – the DoDAF MetaModel 2.0, or DM2) that captures the information in a finite number of classes (or bins) and a set of relationships between those classes. The classes and relationships can have attributes associated with them, which are also pieces of information. At the ontology level, all we see is a mass of data, so most of us still want to see the pictures; indeed, most architects and systems engineers seem to prefer them.

An alternative to the DM2 is the Lifecycle Modeling Language (LML), which contains both an ontology and a set of diagrams to capture the necessary technical and programmatic information associated with a project. This language uses an ontology that appears simpler than the DM2, but it actually hides the complexity through the large number of relationships between entities and the fact that the relationships can have attributes associated with them. LML purports to be the 80% solution to the ontology, meaning that you may decide to extend it to meet your specific needs. But let’s just stick with the base language. LML has 20 classes of entities, and each entity class has a number of relationships associated with it (over 40 in total). So, if we ignore the attributes, we have 20(20-1)/2 = 190 possible pairings of classes. Does that mean we need 190 or more diagrams to visualize all this information? Perhaps; it could be fewer, but it could also be more. Can we really have that many different diagrams to represent this information, which is what a Framework would require? And of course, if we add in all the attributes on both the classes and relationships, then we are trying to display far more information than this.
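That count is just the standard formula for unordered pairs: with n classes there are n(n-1)/2 possible class pairings, so for LML’s 20 classes:

```latex
\binom{20}{2} = \frac{20 \times (20 - 1)}{2} = \frac{380}{2} = 190
```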

So, Frameworks are a useful starting point (which is how John Zachman uses his framework), and they may be enough for enterprise architecture, but they are not a panacea for all systems engineering problems. Sooner or later, as you decompose the system to a level where you can determine what to buy or build, and as you manage these kinds of projects, you will likely need a more robust approach. LML and this kind of ontology make it much easier to capture, manage, and visualize the information. See for yourself: go to www.innoslate.com and try out the tool that uses LML for free. Explore the Schema Editor to see the entire ontology. Play with the “Getting Started” panel examples. I think once you do, you will find this approach works much better than the Frameworks. In addition, Innoslate has a “DoDAF Dashboard” that enables you to create DoDAF models directly from the dashboard, so if that is what you are more familiar with, you will find it the easiest way to get started. Notice that many of the other products are automatically populated with the information from the other models. That’s because Innoslate reuses that information to automatically create the other views!