Brainstorming on Demand
A technical feasibility study is usually based on verified data and should contain enough information for the evaluation to be translated into financial terms. In other words, technical feasibility tends to review existing technology under well-known conditions. Sometimes, though, we need to make a decision about the future functions of a product, or about its possible environment, and these are not always obvious. Guessing may help; the only question is "how?"
What do we normally do when we have no information about something? We begin by thinking, often in a group. In the technical world this is also called Joint Application Development, or a JAD session, and it sometimes resembles brainstorming. There is nothing wrong with this creativity-boosting method; the only problem is that brainstorming offers no signpost telling us where to go next at the "idea crossroads."
The alternative is TRIZ, also known as the Theory of Inventive Problem Solving, which is based on a strict algorithmic approach. TRIZ was created about 60 years ago and was originally aimed at solving engineering tasks. Among other algorithms, TRIZ contains so-called "methods for overcoming thinking inertia." These methods, of which "reduction-extension" (explained below) is one, were specifically created to examine a problem from different angles. In practice about ten of these methods are used, and it is possible, and even recommended, to mix them all when observing a system.
Before we continue, let us review a couple of examples:
The two Mars rovers have now been working for a second year instead of the 90 days originally planned. Several on-board software updates have taken place during these years. "Spirit" and "Opportunity" are sending back photos of the red planet's landscape, so we can also consider the rovers to be remote image editors with an extremely slow connection to their end users.
The Microsoft Office family was initially created as single-user software. Historically, authors wrote the content first and then showed it to an editor. No one expected that the new computer era would allow text to be revised by several people simultaneously. Today it can. We are also able to share our desktop within a group, use more than one monitor at a time, and send our application to a remote workstation.
In other words, software works longer than expected, the number of users grows contrary to preliminary expectations, and the increasing amount of data passing between remote applications keeps outrunning the connection speed. Examples like these are numerous. What would it cost a software company if features like these were not researched and discovered in advance? In most cases only refactoring can help afterwards. What can a technical feasibility analysis do here?
How does it work?
Engineers have been solving similar tasks since the 19th century. Applying the same idea to the IT industry, we find that the process itself is very simple: pick 5 to 10 of the most important system parameters, draw a matrix, and put the zero values of these parameters on one side of the matrix and the infinite values on the other. This is the first step. For example, the number of end users, the amount of data, the connection speed, the response time, and the lifetime are usually important. One side of the matrix will then be filled with zero users, no data, no connections to any applications, no system feedback, and constant modification (which means the system lives for no time at all). The other side consists of the infinite values. The question is, how much is "infinite"? The greater the value, the better. If the expected number of users is about 100, take 1 million. If the predicted application feed is about several kilobytes, extend this value to 1 GB.
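As a minimal sketch of this first step, the snippet below builds such a two-sided matrix in Python for the five parameters named above. All names and values are illustrative assumptions taken from the example, not a fixed TRIZ vocabulary; substitute the parameters of the system you are actually analyzing.

    # A minimal sketch of step one: list the key parameters and pair each
    # with a "zero" extreme and an exaggerated "infinite" extreme.
    # Names and values are illustrative, not prescribed by TRIZ.
    parameters = {
        # parameter             zero side               infinite side
        "number of users":     ("zero users",           "1 million users"),
        "amount of data":      ("no data",              "1 GB per feed"),
        "connection speed":    ("no connections",       "unlimited bandwidth"),
        "response time":       ("no system feedback",   "about a month"),
        "lifetime":            ("modified constantly",  "more than a decade"),
    }

    # Print the matrix: one row per parameter, zero on one side,
    # the exaggerated "infinite" value on the other.
    print(f"{'Parameter':<20}{'Zero side':<24}{'Infinite side'}")
    for name, (zero, infinite) in parameters.items():
        print(f"{name:<20}{zero:<24}{infinite}")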
Correctly selected parameter extremes are meaningful even taken on their own. In our hypothetical example, zero users (i.e., no operators) combined with 1 GB of data is typical of an embedded system. Conversely, 1 million users with no stored data may be typical of an online Macromedia Flash game.
When everything is in place, it is time to choose any cell and analyze how the parameters may affect the system; we are now ready to take the second step. Let us assume the system under consideration is a word processor. Try applying a single parameter with different values: a word processor with no users, a word processor with an infinite number of users, a word processor with no data stored as the result of a user's work, and a word processor with several 1 GB files stored at once to save a single user's work. The third step is to use more than one parameter: for example, a large number of users working with the word processor, assuming the response time of the processor is about a month and the lifetime of this version is more than a decade. The fourth step is to select and analyze the most realistic scenarios.
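To make steps two and three concrete, here is a small sketch, again with wording of my own, that enumerates single-parameter scenarios and then pairwise combinations for the word-processor example; only the enumeration pattern matters, not the exact phrases. Step four, picking the realistic scenarios out of the resulting list, remains a judgment call for the analyst.

    # A hedged sketch of steps two and three for the word-processor example.
    # The scenario phrasing is illustrative; the pattern is what matters.
    from itertools import combinations, product

    extremes = {
        "users":    ["no users", "an infinite number of users"],
        "data":     ["no stored data", "several 1 GB files stored at once"],
        "response": ["instant response", "a response time of about a month"],
        "lifetime": ["a version modified constantly", "a version living more than a decade"],
    }

    # Step two: vary a single parameter at a time.
    for values in extremes.values():
        for value in values:
            print(f"a word processor with {value}")

    # Step three: combine more than one parameter (pairs shown here).
    for p1, p2 in combinations(extremes, 2):
        for v1, v2 in product(extremes[p1], extremes[p2]):
            print(f"a word processor with {v1} and {v2}")

    # Step four, selecting the most realistic scenarios from this list,
    # is left to the analyst.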
This looks good so far, but there are two pitfalls. First, the result depends strictly on the chosen parameters: if something important is overlooked, the analysis will not be accurate. Second, the interpretation of the results should not stand alone; the micro- and macro-systems around the system should be brought into consideration as well. As a matter of fact, some results may lead to innovative solutions that would never be discovered during simple idea generation.
Shamil Nizamov is a freelance writer based in Vancouver. His areas of interest include e-commerce software development, TRIZ, and research into creativity processes. You can discuss the article in a forum: http://www.triz-guide.com