129 Cards in this Set

1. Summarize the distinctions between the analysis phase and the design phase of the SDLC.
The difference between the two phases is that analysis focuses on determining what the business needs are, whereas the design phase takes those business needs and determines how they will be met through a specific system implementation. The analysis phase includes activities designed to discover and document the features and functions the system must have. In the design phase, those features and functions should not change (much). The focus in design is to figure out how to create a system technically that will provide all those needed features and functions.
1. Describe the primary activities of the design phase of the SDLC.
There are many activities performed during the design phase, but the specific ones that are necessary are determined once the team has decided upon the best design strategy for the project. The design strategy options are: build the system in-house, purchase a pre-written software package, or hire an outside firm to do the development. Assuming the design strategy is to build the system in-house, the team will have a myriad of design activities to perform. Their primary goal is to develop physical models of the new system that document how it will perform the functions outlined in the analysis phase. These physical models will represent the new system’s design before the system builders start constructing it. Included in this work will be converting the logical DFDs and ERDs to physical diagrams, planning the integration of the new system with existing systems, making technology architecture decisions, and designing all system components (user interface, input, output, programs, files, and databases).
1. List and describe the content of the system specifications.
This document includes the physical process models, physical data model, architecture report, hardware and software specification, interface design, data storage design, and program design. The system specification conveys exactly what the project team will implement.
1. Describe the three primary strategies that are available to obtain a new system.
The three primary strategies for obtaining a new system are custom development (the company develops the system in-house using corporate resources), packaged systems (purchasing a system off the shelf), and outsourcing (hiring an external developer, vendor, or application service provider to create or supply the system).
1. What circumstances favor the custom design strategy?
The custom development strategy is appropriate when several conditions are met. First and foremost, there should be a unique business need that cannot be fulfilled by a purchased, pre-written solution. Second, there should be functional, technical, and project management skills available in-house, and there should be a desire to build and enhance these skills. The organization should have a proven track record of development and an established systems development methodology. Finally, the project timeframe should be flexible enough to accommodate the uncertainty of a custom development project.
1. What circumstances favor the use of packaged software?
Packaged software is an excellent design strategy when the organizational business functions are fairly common and/or the time available for implementation is short. Accounting, inventory control, and customer record keeping are all examples of common business functions. If the organization has no specialized business needs, the packaged software option should be the first consideration. The organization still needs personnel with functional experience and project management experience to facilitate the integration; however, highly technical developers are not required when integrating packaged software. Additionally, the time frame to acquire this type of software can be extremely short, as the organization can obtain the software from a vendor with little or no delay.
1. What circumstances favor using outsourcing to obtain the new system?
Outsourcing is a good design strategy when the organization does not have the experience or resources itself to do the development in-house, or when it wishes to focus its own resources on other more strategic efforts, and wishes to let an outsourcer handle a less strategic project. It is not a good idea to outsource projects of high strategic value to the organization because the organization does not enhance its own capabilities if the work is outsourced. A very capable project manager is needed to help ensure the success of an outsourcing arrangement.
1. What are some problems associated with using packaged software? How can these problems be minimized?
There are two primary problems associated with purchasing pre-written software. First, the software is generally written to appeal to the widest possible market; its features are likely to be quite generic and may not fit the procedures of the purchasing company very well, so the purchasing company will have to adapt to the software’s features. Second, the software has to be integrated into the organization’s existing systems environment, and data formats are often quite different between the new package and existing legacy systems. To handle these problems to some degree, the purchasing organization can do some customization of the package or develop workarounds.
1. What is meant by customizing a software package?
Customizing a software package means that the organization takes advantage of customizable features built into the software package it has acquired. This can include adding corporate logos or changing default behavior. The extent of customization available is application specific, so the developers should understand which features are customizable during the acquisition decision process.
1. What is meant by creating a workaround for a software package? What are the disadvantages of workarounds (if any)?
A workaround is a custom-built add-on program that interfaces with a packaged software application to provide special functionality. Workarounds can be useful for adding a few special features to a pre-written software package, making it better suited to the organization’s particular needs. The two disadvantages of workarounds are: (1) the workaround will not be supported by the software vendor, and vendor upgrades to the package may cause problems with it; and (2) the vendor may blame the workaround if any problems occur with its product.
1. What is involved with systems integration? When is it necessary?
Packaged software solutions are frequently available that provide acceptable solutions to business needs. Companies are satisfied with the features available and value the time and cost savings. These packages must be integrated into the existing environment of legacy systems and other software packages. Systems integration addresses the data integration issues that become critical in order to combine systems from various sources.
1. Describe the role of application service providers (ASPs) in obtaining new systems. What are their advantages and disadvantages?
ASPs are a relatively new type of outsourcing. An ASP provides the company’s employees access (via the Internet, either through a website or a thin client) to fairly standard applications (office productivity, accounting, inventory), which the company pays for either on a per-use basis or as a monthly fee. Advantages of outsourcing to an ASP include a short setup time for getting the software in use, a low initial outlay, no software maintenance costs, and no infrastructure changes to make. Disadvantages are similar to those of purchasing a software package: specific business needs may not be met by the generic software, customization is minimal, and in this case workarounds are not available because the software is not on site.
1. Explain the distinctions between time and arrangements, fixed-price, and value-added outsourcing contracts. What are the pros and cons of each?
With time and arrangements, the organization pays for whatever time and expenses are incurred to complete the project; the actual final cost of the project will not be known until it is over. With a fixed-price contract, the organization pays a set contractual fee for the work. A value-added arrangement usually involves a small initial cost to the organization, but the outsourcer shares in the benefits of the system as additional compensation. In this case, the organization trades a lower up-front cost for giving up part of the system’s benefits over its life.
1. What is the purpose of a request for proposal (RFP)? How does it differ from the RFI?
The RFP (Request for Proposal) is a document that is used to communicate an organization’s systems needs to a vendor or other provider who may be able to respond to those needs. The RFP initiates communication between the two organizations. RFPs are generally very lengthy and detailed, and the communication that takes place is quite formal. The Request for Information (RFI), on the other hand, is usually shorter and less detailed than an RFP. The RFI indicates that the organization is looking for information, and the vendor is free to respond with that information in a much less formal way.
1. What information is typically conveyed in an RFP?
The RFP expresses in detail the needs and requirements of the organization, and specifies the process it will use to evaluate potential vendors. The vendor can then create a proposal that details how it would be able to fulfill the stated requirements, and what its fees would be.
1. What is the purpose of the weighted alternative matrix? Describe its typical content.
The alternative matrix provides a concise framework for summarizing alternatives, such as various software packages. Adding weights and scores to the matrix allows the decision maker to prioritize the criteria. Typically, analysts take 100 points and distribute them among the criteria according to how important each criterion is deemed to be; the assignment of points is entirely subjective. Once the points have been assigned, the analyst awards each alternative a score (1-5) indicating how well it meets each criterion. The weighted score for a criterion is the points (weight) assigned to that criterion multiplied by the score received. Each criterion’s weighted score is calculated, and the weighted scores are summed. The alternative with the highest total is the best match for the criteria.
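As a sketch of the calculation (the criteria, weights, package names, and scores below are all invented for illustration):

```python
# Weighted alternative matrix sketch: 100 points spread across criteria,
# each alternative scored 1-5 per criterion, totals compared.
criteria_weights = {                 # 100 points distributed subjectively
    "fit with requirements": 40,
    "cost": 25,
    "vendor support": 20,
    "ease of integration": 15,
}

# Subjective 1-5 ratings for three hypothetical packages.
scores = {
    "Package A": {"fit with requirements": 4, "cost": 3,
                  "vendor support": 5, "ease of integration": 2},
    "Package B": {"fit with requirements": 5, "cost": 2,
                  "vendor support": 3, "ease of integration": 4},
    "Package C": {"fit with requirements": 3, "cost": 4,
                  "vendor support": 4, "ease of integration": 3},
}

def weighted_total(ratings):
    # weighted score = weight * score, summed over all criteria
    return sum(criteria_weights[c] * ratings[c] for c in criteria_weights)

# Rank the alternatives, highest weighted total first.
for name, ratings in sorted(scores.items(),
                            key=lambda kv: -weighted_total(kv[1])):
    print(f"{name}: {weighted_total(ratings)}")
```

With these made-up numbers, Package B (370) edges out Package A (365) and Package C (345); changing the weights can easily change the winner, which is why the point distribution deserves careful discussion.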
1. Should the analysis phase be eliminated or reduced when we intend to use a software package instead of custom development or outsourcing?
The analysis phase is very important, even if the design strategy chosen is packaged software. It is critical to understand the business requirements for the problem domain so that the various packaged solution options can be accurately evaluated. The business requirements should drive the evaluation of the packaged software options. We do not want the features of available software packages to determine what is needed to solve the business problems.
1. List and describe the four primary functional components of a software application.
The four general functions of any application are (1) data storage - storage of the system’s data; (2) data access logic - providing access to the system’s data; (3) application logic - the system’s processing functions; and (4) presentation logic - the appearance of the system to the user and the method used to give the system commands.
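The four functions can be illustrated as layers in one small program; every name here is hypothetical, and a real system would separate these into distinct components:

```python
# Toy illustration of the four application functions in a single script.

DATABASE = {"1001": {"name": "Ada", "balance": 250.0}}     # (1) data storage

def fetch_customer(cust_id):                               # (2) data access logic
    return DATABASE.get(cust_id)

def apply_deposit(cust_id, amount):                        # (3) application logic
    customer = fetch_customer(cust_id)
    if customer is None:
        raise KeyError(cust_id)
    customer["balance"] += amount
    return customer

def show_balance(cust_id):                                 # (4) presentation logic
    customer = fetch_customer(cust_id)
    return f"{customer['name']}: ${customer['balance']:.2f}"

apply_deposit("1001", 50.0)
print(show_balance("1001"))   # Ada: $300.00
```

Architecture design is largely about deciding which machine (client or server) each of these four layers runs on.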
1. List and describe the three primary hardware components of a system.
The three main hardware components of an application architecture are servers, clients, and networks. Servers are the computers that provide shared resources (files, applications, data, etc.). Clients are the devices used by the end users, usually a PC. Networks are the communication structures that enable the clients and servers to exchange messages and information.
1. Explain the server-based architecture.
In a server-based architecture, a server, usually a mainframe or minicomputer, performs all application functions; the clients are terminals that merely send requests and display results. (By contrast, a file server is usually a personal computer (PC) in a local area network (LAN) that simply provides access to programs and data upon a client’s request.)
1. Explain the client-based architecture.
In a client-based architecture the clients are responsible for the presentation logic, the application logic, and the data access logic. The server simply stores the data.
1. Explain the client-server architecture.
In a client-server architecture, responsibility for the application’s functions is shared. The client is responsible for the presentation logic, whereas the server is responsible for the data access logic and data storage. The application logic may be split between the client and the server, or may reside entirely on the client or the server.
1. Compare and contrast server-based architectures, client-based architectures and client-server based architectures.
In a server-based architecture, the server (generally a large mainframe) performs all four application functions. In a client-based architecture, the client computers (all PCs) handle the presentation logic, the application logic, and the data access logic. The server computer (also a PC) just stores the data. In a client-server architecture, the clients handle the presentation logic, while the server is responsible for data access logic and data storage. The application logic may be allocated entirely to the clients, entirely to the server, or split between the clients and server.
1. Describe the differences among two-tiered, three-tiered, and n-tiered computing.
The differences in these client-server architectures have to do with the allocation of the various components of the total application between the client and the server(s). In the two-tiered form, the server is responsible for the data and the data access logic, and the client handles the application logic and presentation logic. In the three-tiered form, the client handles the presentation logic; one server handles the application logic; and a third server handles the data storage and the data access logic. In an n-tiered client-server form, the client handles the presentation logic; one or more servers handle the application logic; and one or more servers handle the data storage and data access logic.
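One way to summarize the three allocations is as a small data structure (a sketch of the allocation described above, not a prescription):

```python
# How the four application functions are allocated in each client-server form.
TIERS = {
    "two-tier": {
        "client": ["presentation logic", "application logic"],
        "server": ["data access logic", "data storage"],
    },
    "three-tier": {
        "client": ["presentation logic"],
        "application server": ["application logic"],
        "database server": ["data access logic", "data storage"],
    },
    "n-tier": {
        "client": ["presentation logic"],
        "application servers (one or more)": ["application logic"],
        "database servers (one or more)": ["data access logic", "data storage"],
    },
}

for form, layout in TIERS.items():
    print(form)
    for machine, functions in layout.items():
        print(f"  {machine}: {', '.join(functions)}")
```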
1. What is meant by the term scalable? What is its importance in architecture selection?
Scalability is the ability to increase or decrease the storage and processing capacity of a system with ease. This attribute is important to system developers because it may be difficult to predict accurately the demands a particular computing environment will face. With a scalable architecture, if the team underestimated demand, it will be easy and relatively inexpensive to add the needed capacity; if the team overestimated demand, it will be easy to reduce the system’s capacity and perhaps make better use of the resources elsewhere.
1. Explain the six criteria that distinguish the computing architecture options.
Cost of infrastructure – cost of hardware, software, and supporting network
Cost of development – cost of developing application
Difficulty of development – complexity of application and development tools
Interface capabilities – command line, graphical user interface (GUI), web-based
Control and security – degree of control and security required, given the nature of the system
Scalability – ability to increase or decrease capacity of the system
10. What is meant by the total cost of ownership? How does this factor affect the choice of architecture?
Total cost of ownership (TCO) recognizes that the cost of acquiring a system includes more than hardware and software. TCO adds costs such as technical training, maintenance agreements, extended warranties, and licensing agreements. Because these ongoing costs differ across architectures, comparing options on TCO rather than on purchase price alone can change which architecture is the most economical choice.
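A toy arithmetic sketch, with invented figures, of how TCO exceeds the visible purchase price:

```python
# Illustrative (made-up) cost categories for a packaged-software purchase.
costs = {
    "software licenses": 50_000,
    "server hardware": 20_000,
    "technical training": 8_000,
    "maintenance agreement (3 years)": 15_000,
    "extended warranties": 3_000,
}

# What the budget line usually shows vs. what the system actually costs.
purchase_price = costs["software licenses"] + costs["server hardware"]
tco = sum(costs.values())

print(f"Purchase price: ${purchase_price:,}")        # $70,000
print(f"Total cost of ownership: ${tco:,}")          # $96,000
```

Even in this small example the hidden costs add more than a third to the purchase price, which is why TCO comparisons matter when choosing an architecture.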
10. Describe the major nonfunctional requirements and how they influence architecture design
The major nonfunctional requirements are operational, performance, security, and cultural/political. Operational requirements specify the operating environment for the system, and include issues regarding the technical environment, integration with other systems, portability and maintenance. These requirements (especially the technical environment requirements) have the most impact on the architecture design.

Performance requirements include issues such as response time, capacity, and reliability. Security requirements deal with protection from disruptions and data loss. Cultural/political requirements include issues specific to the particular countries in which the system will be used. These nonfunctional requirements do not have as much impact on the architecture design as the operational requirements, but may still be important if the operational requirements do not strongly suggest a specific architecture design.
10. Describe the types of performance requirements and how they may influence architecture design.
Performance requirements include:
Speed – response time (how long does it take for the system to respond to a user request?), and transaction delay (how long does it take for an event on one part of the system to be reflected in another part of the system?)

Capacity – how many users does the system support? The number of both internal (employees) and external users (customers) should be factored in.

Availability and Reliability – When does the system need to be available? 24x7? During the 40-hour work week only? Is it absolutely imperative that the system be up and running with no downtime? For medical and military operations, this may be the case.
10. Describe the types of security requirements and how they may influence architecture design.
Security requirements include:
System Value – estimated business value of the system and its data.

Access Control – determining who is authorized to access which resource

Encryption and Authentication – determining what data will be encrypted and whether or not authentication will be required for user access

Virus Control – controlling viral spread
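The access control item above can be sketched as a simple authorization table (the roles, resources, and actions are made up):

```python
# Minimal access-control sketch: an authorization table mapping roles to
# the resources and actions they are permitted.
PERMISSIONS = {
    "clerk":   {"customer_records": {"read"}},
    "manager": {"customer_records": {"read", "write"}, "payroll": {"read"}},
}

def is_authorized(role: str, resource: str, action: str) -> bool:
    # Unknown roles or resources default to no access.
    return action in PERMISSIONS.get(role, {}).get(resource, set())

assert is_authorized("manager", "payroll", "read")
assert not is_authorized("clerk", "customer_records", "write")
```

A real system would back such a table with authentication, so that the claimed role is actually verified before the check is made.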
10. What is meant by system value? Explain how various systems can have a different value to the organization.
System value is an assessment of the costs the organization might incur if the system were unavailable or its data were compromised. These are not the costs of replacing hardware and/or software, but the costs associated with loss of business, potential lawsuits, decreased customer satisfaction, the cost of rebuilding the organizational data, and so on. Because these potential losses differ from system to system, different systems have very different values to the organization.
10. Explain the difference between a symmetric encryption algorithm and an asymmetric encryption algorithm
A symmetric encryption algorithm is one in which the same key is used both to encrypt the data and to decrypt the data. An asymmetric encryption algorithm is one in which separate keys are established: one key encrypts the data, and a different key decrypts it.
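The distinction can be sketched with toy ciphers; these are for illustration only and must never be used for real security:

```python
# --- Symmetric: the SAME key encrypts and decrypts (toy XOR cipher) ---
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

shared_key = b"secret"
ciphertext = xor_cipher(b"hello", shared_key)
assert xor_cipher(ciphertext, shared_key) == b"hello"   # same key reverses it

# --- Asymmetric: DIFFERENT keys (textbook RSA with tiny primes) ---
p, q = 61, 53
n = p * q                            # modulus, part of both keys
e = 17                               # public exponent (encryption key)
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (decryption key)

message = 65
c = pow(message, e, n)               # anyone with the public key can encrypt
assert pow(c, d, n) == message       # only the private key decrypts
```

The symmetric cipher's weakness is key distribution (both parties must already share the secret); the asymmetric scheme avoids that by letting the encryption key be public.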
10. What is meant by authentication? What is its role in securing transactions?
The term authentication can be defined as ‘proving identity’. Typically there are three factors that aid in proving identity: what you have (a key or access card), what you know (a password or PIN), and what you are (a retina scan or fingerprint). Requiring all three factors is the strongest type of authentication. Access control defines who has access to what; authentication establishes the identity of the ‘who’.
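A minimal sketch of checking two of the three factors, assuming a salted password hash (“what you know”) and a device-held token (“what you have”); all values are illustrative:

```python
import hashlib
import hmac
import secrets

# Enrollment: store a salted password hash and provision a device token.
salt = secrets.token_bytes(16)
stored_hash = hashlib.pbkdf2_hmac("sha256", b"correct horse", salt, 100_000)
device_token = secrets.token_hex(16)      # held on the user's device

def authenticate(password: bytes, token: str) -> bool:
    knows = hmac.compare_digest(          # constant-time comparison
        hashlib.pbkdf2_hmac("sha256", password, salt, 100_000), stored_hash)
    has = hmac.compare_digest(token, device_token)
    return knows and has                  # both factors must succeed

assert authenticate(b"correct horse", device_token)
assert not authenticate(b"wrong guess", device_token)   # knowledge factor fails
```

Adding a biometric check (“what you are”) would supply the third factor and the strongest form of authentication.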
10. Describe the usefulness of the Internet’s public key infrastructure (PKI).
PKI is useful because it provides a disinterested third party in the encryption and authentication process. PKI uses digital certificates, which each organization or individual applies for from a certificate authority (CA). The CA is responsible for authenticating the individual or organization before issuing the digital certificate, and then holds that certificate in trust. The organization or individual then uses the digital certificate to authenticate its identity.
10. Describe the types of cultural and political requirements and how they influence the architecture design.
Cultural and Political requirements include:
Multilingual – Does the environment require the system to operate in more than one language?

Customization – Are there features that can be customized according to different national cultures?

Making Unstated Norms Explicit – Are there assumptions that may be ambiguous in different national cultures? If so, they need to be explicitly stated.

Legal – Are there national and/or international legal issues that need to be addressed?
10. Explain the difference between concurrent multilingual systems and discrete multilingual systems.
A concurrent multilingual system is one in which several languages are available simultaneously; users can choose among the languages at any time. A discrete multilingual system is one in which one of many languages is chosen at installation; reinstallation is required for the system to operate in a different language.
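A concurrent multilingual system can be sketched as a translation table consulted at run time (the languages and strings are invented for illustration):

```python
# All translations are loaded at once; each user (even each request) picks
# a language at run time. A discrete multilingual system would instead bake
# exactly one of these tables in at installation time.
MESSAGES = {
    "en": {"greeting": "Welcome", "farewell": "Goodbye"},
    "es": {"greeting": "Bienvenido", "farewell": "Adiós"},
    "fr": {"greeting": "Bienvenue", "farewell": "Au revoir"},
}

def t(key: str, lang: str = "en") -> str:
    # Fall back to English when a language or key is missing.
    return MESSAGES.get(lang, MESSAGES["en"]).get(key, MESSAGES["en"][key])

print(t("greeting", "es"))   # Bienvenido
print(t("farewell", "fr"))   # Au revoir
```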
10. Why is it useful to define the nonfunctional requirements in more detail even if the technical environment requirements dictate the specific architecture?
If the technical environment requirements dictate the architecture design, it is still important to define the other nonfunctional requirements in detail. This is because these requirements will become important in later stages of the design and implementation phases of the project.
10. What is the purpose of the hardware and software specification?
The hardware and software specification is a document that details the requirements of the new system in terms of operating system, hardware, software, and network devices.
10. What do you think are three common mistakes that novice analysts make in architecture design and hardware and software specification?
Architecture design is a difficult process, so it is easy for a novice analyst to make mistakes. Some likely mistakes include:
Not considering the future of the system, and selecting a design based only on current needs.
Not considering all aspects of system security that need to be factored into the architecture design.
Failing to include cultural, political, and legal requirements that may be important for the system.
The hardware and software specification is also subject to mistakes. For example:
Omitting a key piece of software needed in the overall system.
Omitting associated software issues (and costs) such as training, maintenance, and licensing agreements.
Providing incomplete hardware specifications.
10. Are some nonfunctional requirements more important than others in influencing the architecture design and hardware and software specification?
The technical environment requirements have the most influence on the architecture design and the hardware/software specification. These requirements follow directly from the business requirements for the system and generally dominate all other considerations.
10. What do you think are the more important security issues for a system?
It is difficult to rank security issues since all are important. In today’s environment, however, there are some issues that must be addressed. For example, protection from external access is increasingly important in our networked world. Since more and more business transactions are conducted over networked systems and the Internet, encryption and authentication controls are essential. Viruses are the most common security problem, so systems need to prevent their spread.
1. Explain three important user interface design principles.
The authors list six principles of user interface design:
Layout - the interface should be a series of areas on the screen that are used consistently for different purposes.
Content Awareness - the user is always aware of where they are in the system and what information is being displayed.
Aesthetics - interfaces should look inviting and should be easy to use.
User Experience - experienced users prefer ease of use, while inexperienced users prefer ease of learning.
Consistency - users can predict what will happen before a function is performed.
Minimize Effort - interface should be simple to use.
2. What are three fundamental parts of most user interfaces?
Navigation mechanism - the way the user gives instructions to the system and tells it what to do.
Input mechanism - the way in which the system captures information.
Output mechanism - the way the system provides information to the user or to other systems.
2. Why is content awareness important?
Content awareness means that the interface makes the user aware of the information delivered through the interface with the least amount of user effort. This is important because if the user is constantly aware of where he is and what he is seeing, he will find the system much easier to use and his satisfaction will be high.
2. What is white space and why is it important?
White space refers to areas on an interface that are intentionally left blank. The more white space on an interface, the less dense the information content. Designers need to try to strike a balance between information content and white space. Some white space is necessary to help the users find things on the interface. Generally, more experienced users need less white space than novice users.
2. Under what circumstances should densities be low? High?
Low densities are preferred by infrequent or novice users of an interface. These users will be unfamiliar with the interface and will be helped by having a balance of information and white space on the interface. High densities can be acceptable to experienced users of the interface, because they are highly familiar with the information on the interface and do not need as much white space to help them find what they are looking for.
2. How can a system be designed to be used by both experienced and first time users?
Experienced users prefer systems that focus on ease of use, while novice users prefer systems that are easy to learn. These two goals are not necessarily mutually exclusive. Generally, systems should be set up so that the commonly used functions can be accessed quickly, pleasing the experienced users. To assist the novice users, guidance should be readily available, perhaps through the “show me” functions that demonstrate menus and buttons.
2. Why is consistency in design important? Why can too much consistency cause problems?
Consistency means that all parts of the same system work in the same way. This enables the users to predict what will happen because a function in one part of the system works the same way in other parts of the system. Users will be confident as they work with different parts of the system if they can predict the behavior of functions throughout the system. The problem with too much consistency is that sometimes the users don’t differentiate forms or reports that look very similar to each other, and inadvertently use the wrong one. So, in these cases, there should be enough unique characteristics to distinguish each form and report from the others.
2. How can different parts of the interface be consistent?
The navigation controls can be consistent, using the same icon or command to trigger an action throughout the system. Terminology can be consistent throughout the interface. The content portion of the screen that contains forms and reports should also present consistently designed reports and forms. Messages and information in the status area should be specified consistently throughout the system.
2. Describe the basic process of user interface design.
First, identify ‘use cases’ that describe commonly used patterns of actions that users will perform. These use cases will be valuable in ensuring that the interface permits the users to enact these use cases quickly and smoothly. Next, develop the interface structure diagram, defining the basic structure of the interface (screens, forms, and reports) and how the interface components connect. Third, develop interface standards, the basic design elements that will be used throughout the interface. Fourth, create prototypes of the various interface components (navigation controls, input screens, output screens, forms, and reports). Finally, evaluate the prototypes and make changes as needed.
2. What are use case scenarios and why are they important?
Use cases describe commonly used patterns of actions that users will perform, that is, how users will interact with the system. Use cases are developed for the most common ways of working through the system, and they are valuable in ensuring that the interface permits the users to enact these patterns quickly and smoothly.
2. What is an interface structure diagram (ISD) and why is it used?
An interface structure diagram shows all the screens, forms, and reports in the system, how they are related, and how the user moves from one to another. The diagram helps depict the basic components of the interface and how they work together to provide users the needed functionality. The structure of the interface depicted in the ISD can be examined using the use cases to see how well the use cases can be performed. This is an important early step in developing simple paths through the most common activities performed in the system.
2. Why are interface standards important?
Interface standards help define the basic, common design elements in the system. These standards help ensure consistency throughout the system.
2. Explain the purpose and contents of interface metaphors, interface objects, and interface actions, interface icons, and interface templates.
The interface metaphor provides a concept from the real world that helps the user understand the system and how it works. If the user understands the metaphor being used, he will probably be able to predict where to find things and how things will work even without actually using the system.
Interface objects are the fundamental building blocks of the system. Object names should be based on the most understandable terms.
Interface actions specify the navigation and command language style and the grammar of the system. Action terminology is also defined.
Interface icons are pictures that are used to represent objects and actions in the system, often shortcuts, that are available throughout the system.
The interface template defines the general appearance of all screens in the information system and all forms and reports that are used. The template consolidates all the other major interface design elements - metaphors, objects, actions, and icons.
2. Why do we prototype the user interface design?
Prototyping helps the users and programmers understand how the system will perform. Prototypes can be very useful in helping the users conceptualize how they will actually work with the system, and prototypes can help identify problems or misconceptions in the interface before it is actually implemented.
2. Compare and contrast the three types of interface design prototypes.
Storyboards are really just pictures or drawings of the interface and how the system flows from one interface to another. HTML prototypes are web pages that show the fundamental parts of the system. Users can interact with the system by clicking buttons and entering data, moving from page to page to simulate navigating through the system. Language prototypes create models of the interface in the actual language that will be used to implement the system. These will show the user exactly what the interface will look like, which is not possible with the other two methods.
2. Why is it important to perform an interface evaluation before the system is built?
An interface assessment is important before the system is built because we need to do as much as we can to improve the interface design prior to implementation. It is wasteful to wait until after implementation to evaluate the interface because it will be expensive to go back and modify the interface at that point.
2. Compare and contrast the four types of interface evaluation.
These techniques vary in terms of the degree of formality and the amount of user involvement. Heuristic evaluation involves assessing the interface based on a checklist of design principles. This assessment is usually performed by team members, who independently assess the interface and then compare their assessments. Weaknesses that are common in all the evaluations then point to areas that need modification. Users are not involved in this process. In a walkthrough evaluation, the users see the interface at a meeting presentation, and they are “walked through” the parts of the interface. The interactive evaluation can be used when the prototype has been created as an HTML or language prototype. The users can actually interact with the interface as if they were using the system, and can give direct comments and feedback based on their experience. Problems or areas of confusion can be noted and corrected by the team. Formal usability testing has the users interacting with the interface without guidance from the project team. Every move made by the user is recorded and then analyzed later in order to improve the interface.
2. Under what conditions is heuristic evaluation justified?
Heuristic evaluation is probably justified in situations where the interface is well understood. When there is little uncertainty about how the interface should function, then it is probably sufficient to just assess it internally by comparison to a checklist of design principles. It would be dangerous to use this technique (which does not involve users) if there was uncertainty about what should appear in the interface or how it should function.
2. What type of interface evaluation did you perform in the Your Turn Box 9.1?
This is an example of heuristic evaluation, since the interface is being compared to a set of design principles.
Describe three basic principles of navigation design
Prevent Mistakes - this principle is directed toward developing the navigation controls to help the user avoid making mistakes.
Simplify Recovery from Mistakes - this principle recognizes that mistakes will happen, and so is directed toward making it as easy as possible to recover from those mistakes.
Use Consistent Grammar Order - This principle states that the order of commands should be consistent throughout the system.
2. How can you prevent mistakes?
While it is impossible to completely prevent mistakes, there are some things that will help the user avoid mistakes. First, make sure all commands and actions are clearly labeled. Limit the number of choices that are presented to the user at one time to help reduce confusion. Never display a command or action that is inappropriate for the situation. Also, give users a chance to confirm potentially destructive actions (such as deleting a record).
2. Explain the differences between object-action order and action-object order.
Commands given to the system usually follow a sequence of ‘specify the object, then specify the action’ or ‘specify the action, then specify the object.’ This is referred to as the grammar order of the commands. The designers should select the grammar order desired for the system and use it consistently.
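The two grammar orders can be made concrete with a small sketch. This is a hypothetical command parser (the command strings and function names are invented for illustration); it simply shows that the same request can be phrased either way, and that the designers must pick one convention and stick to it.

```python
# Hypothetical command parser illustrating grammar order.
# Action-object order: the verb comes first ("delete invoice42").
# Object-action order: the object comes first ("invoice42 delete").

def parse_action_object(command):
    """Split a command assuming action-object grammar order."""
    action, obj = command.split(maxsplit=1)
    return {"action": action, "object": obj}

def parse_object_action(command):
    """Split a command assuming object-action grammar order."""
    obj, action = command.rsplit(maxsplit=1)
    return {"action": action, "object": obj}

print(parse_action_object("delete invoice42"))
print(parse_object_action("invoice42 delete"))
```

Both calls recover the same action and object; what differs is the grammar the user must learn, which is why mixing the two orders in one system is confusing.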
2. Describe four types of navigation controls.
Languages - most often this navigation control refers to a command language, a set of special instructions used to command the system. In order to perform a task the user must know the correct command to give the system. Natural language interfaces free the user to give instructions in everyday terminology, but these types of systems are not common.
Menus - this navigation control presents the user with a list of options that can be performed as needed. Menu structures present the user with an organized set of commands to apply.
Direct manipulation - this type of navigation control involves working directly with interface objects, such as dragging a file from one location to another.
Voice recognition - this navigation control involves giving instructions to the computer verbally. Some of the systems only recognize certain commands, while others recognize more natural speech. Progress is being made in this technology, but it is not yet common in systems.
2. Why are menus the most commonly used navigation control?
Menus are the most commonly used navigation control because they are much easier to learn than a language, and they are very simple to work with, enhancing the ease of use of the system.
2. Compare and contrast four types of menus.
The menu bar is usually the main menu of the system. It consists of a list of commands across the top of the screen that is always displayed. The commands on the menu bar represent the main objects and/or actions of the system, and lead to other menus. Drop-down menus appear immediately below another menu. A series of commands are listed, and these lead to direct actions or other menus. The drop-down menu disappears after one use. Pop-up menus appear to ‘float’ on the screen, usually triggered by a right-click on the mouse. A series of commands that pertain to the work the user was doing are listed. Pop-up menus are often used to present an experienced user with shortcuts to common commands. Pop-up menus disappear after one use. A tab menu is a multi-page menu, each page represented by a tab on the menu. Each tab represents a set of related actions or settings. The tab menu will remain on the screen until the user closes it.
2. Under what circumstances would you use a drop-down menu versus a tab menu?
A drop-down menu is commonly used as the second-level menu, triggered when one of the main menu options is selected. The drop-down menu lists another set of more specific commands that will either lead directly to an action or to another, more detailed menu. The tab menu is chosen whenever the user needs to make multiple choices (such as specifying several settings) or perform several related commands. The tab menu stays open until the user has completed making the choices and closes the menu. Use a tab menu whenever the user needs to do several related tasks at one time.
2. Describe five types of messages.
Error messages are displayed when the user has done something that is not permitted or cannot be carried out. An error message should inform the user why the attempted action is illegal or incorrect. Confirmation messages are displayed whenever the user has entered a command that has major significance and may be destructive (such as shutting down the system or deleting a record). The confirmation message is used to force the user to verify that the action is the correct one. Acknowledgment messages signify that an action or task is complete. These messages can be used to ensure that the user knows what the system is doing, but they can become very annoying if encountered frequently. Delay messages indicate that the system is performing a task and that the user should wait until the task is completed. These messages keep the user informed about the system status, and can be very helpful, especially to novice users who may not appreciate the time certain tasks require. Help messages provide the user with additional information, and are an important means of giving users instructions and guidance when needed. Even experienced users will need access to help for rarely used system functions.
2. What are the key factors in designing an error message?
An error message should first identify the error. Some additional explanation of the problem is also usually provided. Then, the message should inform the user how to correct the problem. Finally, a button for user response is usually included that clears the message off the screen and enables the user to take the corrective action.
2. What is context-sensitive help? Does your word processor have context-sensitive help?
Context-sensitive help means that the help system recognizes what the user was doing when the help was requested, and help specific for that task is displayed. MS Word does have context-sensitive help.
2. Explain three principles in the design of inputs.
The most significant input design principle is to capture data as close to its point of origin as possible. By electronically collecting the data at its point of origin, time delays are minimized and errors can be reduced. A second important input design principle is to minimize user keystrokes. Use source data automation techniques whenever possible. Only ask the user to enter new data into the system; use reference tables and lookups whenever possible. When the inputs have known values, use default values, check boxes, radio buttons, or drop-down lists. Finally, use the appropriate mode of processing (online versus batch) for the application. Batch applications are generally simpler than online applications, but have the disadvantage of not updating the databases or files immediately. Online applications are more complex than batch, but are used when it is necessary to have immediate update of the databases or files.
2. Compare and contrast batch processing and on-line processing. Describe one application that would use batch processing and one that would use on-line processing.
Online applications process the entire transaction, including updates to the files or databases, immediately when the transaction occurs. Batch applications, on the other hand, accumulate transactions over some time period, then process all transactions from the batch completely and post them to the files and databases at one time. An airline reservation system is a classic example of an online system, since the flight reservation is immediately reflected in the system database. Payroll systems are commonly batch applications, with payroll transactions accumulated over the pay period and processed as a batch at one time.
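The contrast between the two processing modes can be sketched in a few lines. This is illustrative only: the `balances` dictionary stands in for a master file, and the account names are invented.

```python
# Sketch contrasting online and batch transaction processing.
# "balances" stands in for a master file.

balances = {"acct1": 100, "acct2": 50}
batch_queue = []   # stands in for a transaction file

def post_online(account, amount):
    """Online: the master record is updated the moment the
    transaction occurs."""
    balances[account] += amount

def record_for_batch(account, amount):
    """Batch: transactions accumulate in the transaction file..."""
    batch_queue.append((account, amount))

def run_batch():
    """...and are posted to the master file all at one time."""
    while batch_queue:
        account, amount = batch_queue.pop(0)
        balances[account] += amount

post_online("acct1", 25)       # acct1 immediately shows 125
record_for_batch("acct2", 10)  # master file still shows 50 for acct2
record_for_batch("acct2", 5)
run_batch()                    # only now does acct2 show 65
```

The key difference is visible in the middle of the run: after `record_for_batch` the master data is stale, which is acceptable for payroll but not for airline seat inventory.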
2. Why is capturing data at the source important?
Capturing data at the source has three advantages. First, it can reduce costs because work does not have to be duplicated. Second, it reduces delays in processing. Third, it reduces the likelihood of error.
2. Describe four devices that can be used for source data automation.
Bar code readers scan bar codes found on products to enter data directly into the system. Optical character readers can read and enter printed numbers and text. Magnetic stripe readers enter information from a stripe of magnetic material. Smart cards contain microprocessors, memory chips, and batteries to maintain information which then can be read by smart card readers.
2. Describe five types of inputs.
Text boxes are areas defined on the screen where the user enters text. The text can be a single line or a scrollable region. Text boxes are used whenever the user needs to enter free-form data. Number boxes are used when the user must enter numeric data. Check boxes are used whenever the user can choose one or more items from a known list. Radio buttons are used when the user needs to select one choice from a known set of options. List boxes present the user with a list of items from which one is selected.
2. Compare and contrast check boxes and radio buttons. When would you use one versus the other?
Physically, check boxes are usually represented as small squares, and radio buttons are small circles. Operationally, they are used very differently. Check boxes are used when the user can select one or more choices from a list of options. Radio buttons are mutually exclusive. Only one button can be chosen at a time. Selecting one radio button removes the selection from any button previously selected. Use radio buttons when you want to force the user to make one choice. Use check boxes when the user can select multiple items from the list.
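The two selection behaviors can be modeled without any GUI toolkit. This is a data-model sketch with invented option names; it shows only the selection logic, not how a real widget library implements it.

```python
# Selection-model sketch: check boxes allow many selections,
# radio buttons allow exactly one.

def toggle_checkbox(selected, choice):
    """Check boxes: clicking a box toggles it; any number of
    items may be selected at once."""
    updated = set(selected)
    updated.symmetric_difference_update({choice})
    return updated

def pick_radio(selected, choice):
    """Radio buttons: choosing one option deselects whatever
    was previously chosen."""
    return {choice}

boxes = set()
boxes = toggle_checkbox(boxes, "fries")
boxes = toggle_checkbox(boxes, "salad")   # both remain selected

radio = set()
radio = pick_radio(radio, "small")
radio = pick_radio(radio, "large")        # "small" is replaced
```

After the two check-box toggles both items are selected; after the two radio picks only the last choice survives, which is exactly the mutual-exclusion property described above.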
2. Compare and contrast on-screen list boxes and drop-down list boxes. When would you use one versus the other?
On-screen list boxes present the user with a list of choices that are always displayed. A drop-down list box displays the list of choices as needed. Generally, there is not enough screen space to use on-screen list boxes unless the list is quite short. Therefore, the drop-down list box is used to display longer lists temporarily and then disappear from the screen after the choice is made. The amount of available screen space dictates which type of list box will be used.
2. Why is input validation important?
Input is validated in order to reduce the amount of erroneous data that is entered into the system. Clearly, the quality of the information that comes out of a system is dependent on the quality of the input data. Therefore, we must do as much as is reasonable to assure high quality data is input in the system. The various techniques of data validation help us do that.
2. Describe five types of input validation methods.
Completeness checks are performed to verify that all required data items have been entered. In some cases, data is optional in a transaction. However, when specific data is required, a completeness check will ensure that something is entered in every required field. Format checks are used when a particular data format is expected in the field and can be verified. Range checks are commonly used when a numeric item falls within some expected range of values. A check digit check is used to validate numeric code fields. In these situations, an algorithm establishes a check digit for each occurrence of the numeric code. Whenever a numeric code is re-entered into the system its check digit is recalculated. If the calculated check digit does not match the expected check digit, there has probably been a data entry error in the code, and it needs to be re-entered. Consistency checks are performed when there is a relationship between field values that is known and can be checked. Database checks are used to compare an entry against a value stored in a file or database to ensure it is a valid value.
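Most of these checks reduce to one-line predicates. The sketch below illustrates five of them against a hypothetical order-entry record; the field names, the five-digit ZIP format rule, and the toy check-digit scheme are all assumptions made for illustration (real systems often use an algorithm such as Luhn for check digits).

```python
import re

def completeness_check(record, required=("customer_id", "quantity")):
    """Completeness: every required field must contain something."""
    return all(record.get(f) not in (None, "") for f in required)

def format_check(zip_code):
    """Format: example rule that a US ZIP code is exactly five digits."""
    return re.fullmatch(r"\d{5}", zip_code) is not None

def range_check(quantity, low=1, high=999):
    """Range: a numeric value must fall within expected bounds."""
    return low <= quantity <= high

def check_digit_ok(code):
    """Check digit (toy scheme): the last digit must equal the sum
    of the preceding digits modulo 10."""
    digits = [int(d) for d in code]
    return digits[-1] == sum(digits[:-1]) % 10

def consistency_check(record):
    """Consistency: related fields must agree; a ship date cannot
    precede the order date (ISO date strings compare correctly)."""
    return record["ship_date"] >= record["order_date"]

print(format_check("30332"))       # True
print(range_check(1500))           # False: outside 1..999
print(check_digit_ok("12339"))     # True: (1+2+3+3) % 10 == 9
```

A database check is omitted here because it needs a data store to compare against; it is the same idea with the valid values held in a table rather than in code.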
2. Explain three principles in the design of outputs.
First, it is important to understand how the report will be used. It is not enough just to know what data should appear on the report. The report designer needs to know how the user will utilize the report; what sequence or sorting arrangement is needed, what subtotals are needed, when the information is needed, etc. Second, the report design needs to manage the information load presented in the report. A report that dumps in ‘everything but the kitchen sink’ will probably not be useful to the recipient. Have the user specify the question(s) he wants to answer by using the report, and then provide just the information needed to answer those questions. Third, avoid presenting the information in a biased way. Bias can be unintentional, so carefully assess choices made on information sorting or structuring graphical outputs.
2. Describe five types of outputs.
Detail reports are reports that only list detailed information about a few items. Detail reports are designed for situations where complete information is sought. Summary reports list summarized information about a large number of items. Exception reports list information only about items that meet some predefined criterion (e.g., accounts over 30 days past due). Turnaround documents provide information about some system output (e.g., a bill), but also include a section that will re-enter the system as an input (e.g., a payment coupon). A graph is a depiction of numerical relationships using a two- or three-dimensional chart.
2. When would you use electronic reports rather than paper reports, and vice versa?
Paper reports have the advantage of being permanent, easy to use, and portable (if they are small). Paper reports do not require the presence of a computer in order to be used. A report should be printed on paper if its content is fairly static and if it needs to be taken from place to place to be used, and computers are not readily available. Electronic reports store the reports on servers so that they can be readily viewed from any computer. Electronic reports are so inexpensive that often many variations of the reports are created. Users can refer to the reports online or print them locally as needed. It is generally advantageous and less costly for users to print reports locally as needed rather than printing all reports centrally.
2. What do you think are three common mistakes that novice analysts make in interface design?
Failing to focus on the most common paths through the interface
Making the interface too crowded
Failing to think about whether the primary users of the system are casual, occasional users or frequent, experienced users
Being inconsistent from one place in the interface to another in terms of standard design features and terminology
2. How would you improve the form in Figure 9-4?
A user who has not seen this form before and does not know how it is used will find it difficult to make suggestions for improvement. The form is very dense, however, and so it might be useful to segment it into two pages that are logical subsets of the form content. Each page could be much less dense and therefore easier to use.
1. What is the purpose of creating a logical process model and then a physical process model?
During the analysis phase, the logical process models are used to depict the processes and data flows that are needed to support the functional requirements of the new system. However, logical process models do not include implementation details, or show how the final system will work. Physical process models include that information, in terms of technology, format of information moving through processes, and the human interaction that is involved.
1. What information is found on the physical DFD that is not included on the logical DFD?
The physical DFD includes all elements on the logical DFD plus: implementation references for data stores (e.g. type of database), processes (e.g. programs) and data flows (e.g. paper reports, input screens, etc), human-machine boundaries, any additional system-related data stores, processes, and data flows.
1. What are some of the system-related data elements and data stores that may be needed on the physical DFD that were not a part of the logical DFD?
Student answers will vary, however additions are typically related to technical limitations or to the need for audits, controls, or exception handling.
1. What is a human-machine boundary?
A human-machine boundary line is drawn in each instance where a human will interact with the system; the boundary might fall at a web page or an application screen, for example. Deciding where to place human-machine boundaries involves addressing issues of cost, efficiency, and integrity.
1. Why is using a top-down modular approach useful in program design?
With the top-down approach, the program design is specified broadly, or at a high-level, and then more details are added that show the components of the program and how they work together. Developing program designs with this approach helps to ensure that efficient programs are written, that the programs work together effectively in the system, and that the system performs as it is supposed to perform.
1. Describe the primary deliverable produced during program design. What does it include and how is it used?
At the end of program design, the project team compiles the program design document, including all of the structure charts and program specifications that will be used to implement the system. The program design is used by programmers to write code.
1. What is the purpose of the structure chart in program design?
The structure chart shows all of the components of code that need to be included in a program, and shows the arrangement of those components as sequence, selection, or iteration control structures.
1. Where does the analyst find the information needed to create a structure chart?
One recommendation for creating a structure chart is to begin with the processes depicted on the logical DFD. Each process on a DFD tends to represent one module on the structure chart, and if leveled DFDs are used, then each DFD level tends to correspond to a different level of the structure chart hierarchy.
1. Distinguish between a control module, subordinate module, and library module on a structure chart. Can a particular module be all three? Why or why not?
A control module contains the logic for performing other modules that are subordinate to it. Subordinate modules are ‘underneath’ a higher-level module in the hierarchy. Library modules are modules that perform tasks in several places in the system; they are reused. It is possible for a control module to be a subordinate module to another higher-level module; however, it is not as likely that the module will also be a library module. This is because library modules generally perform a frequently repeated task (such as retrieving data) rather than controlling more subordinate modules (which requires fairly specific logic).
1. What does a data couple depict on a structure chart? A control couple?
Data couples represent the movement of data elements or structures between modules. Control couples represent parameters, messages, or status flags that are moved between modules.
1. It is preferable for a control couple to flow in one particular direction on the structure chart. Which direction is preferred and why?
It is highly preferable for a control couple to be passed from a subordinate module to a control module. This implies that the subordinate module has found a condition that is passed to the control module to use in determining how the program will operate. If the control module passes a control couple to a subordinate, it implies that the subordinate module has control over the higher-level module.
1. What is the difference between a transaction structure and a transform structure? Can a module be a part of both types of structures? Why or why not?
A transaction structure contains a control module that calls subordinate modules, each of which handles a particular transaction. Usually, the subordinate modules are mutually exclusive in this structure, meaning that one and only one will be called by the control module, and then processing will revert back to the control module. The transform structure has a control module that calls several of its subordinate modules in sequence until a task or response to an event is complete. These modules work together to complete a process. It is unlikely that a module will be a part of both types of structures. Transaction structures are usually found at the upper end of the structure chart, and transform modules are usually found at the lower end of the structure chart.
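The two structures can be sketched as plain functions. The module names below are hypothetical; the point is the calling pattern, not the tasks themselves.

```python
# Transaction structure: the control module dispatches to exactly
# one mutually exclusive subordinate per request.

def add_customer(data):
    return f"added {data}"

def delete_customer(data):
    return f"deleted {data}"

def maintain_customers(command, data):
    """Control module of a transaction structure."""
    handlers = {"add": add_customer, "delete": delete_customer}
    return handlers[command](data)   # one and only one is called

# Transform structure: the control module calls its subordinates
# in sequence until the task is complete.

def read_input(raw):
    return raw.strip()

def compute(value):
    return value.upper()

def write_output(value):
    return f"OUT: {value}"

def process_record(raw):
    """Control module of a transform structure."""
    return write_output(compute(read_input(raw)))
```

In `maintain_customers` the subordinates are alternatives; in `process_record` they are stages that cooperate on one piece of work, which matches the transaction-versus-transform distinction above.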
1. What is meant by the characteristic of module cohesion? What is its role in structure chart quality?
Module cohesion refers to how well the lines of code within each structure relate to each other. Ideally, each module should perform one task only, which results in smaller, less complex modules that are easier to perfect and maintain, thus contributing to the overall quality of the structure chart.
1. List the seven types of cohesion. Why do the various types of cohesion range from good to bad? Give an example of “good” cohesion and one example of “bad” cohesion.
The types of cohesion range from good to bad because as the relationship among a module’s tasks weakens, the module becomes harder to understand, test, and maintain. Functional cohesion is the “best” situation, in which a module performs one and only one problem-related task. Sequential cohesion involves a module performing more than one task, where the output from one task is used by the next task in the module. In communicational cohesion, two or more tasks are combined in a module because both tasks require the same input elements. In procedural cohesion, a module combines several tasks that are related only by the order in which they must be performed. In temporal cohesion, several otherwise unrelated tasks are combined in a module because they are performed at the same time. Logical cohesion combines several different tasks; the one to be performed is chosen by the control module and communicated through a control message passed to the subordinate module. Coincidental cohesion incorporates a number of tasks that have no apparent relationship to one another. This kind of cohesion is the poorest.
1. What is meant by the characteristic of module coupling? What is its role in structure chart quality?
Module coupling refers to how closely modules are interrelated. Ideally, modules are loosely coupled, which means that the design is characterized by a minimal number of interactions (e.g. data passing) between modules. Modules that are loosely coupled can be considered to be fairly independent and the interactions between them relatively easier to track and maintain, thus contributing to the overall quality of the structure chart.
1. What are the five types of coupling? Give one example of “good” coupling and one example of “bad” coupling.
Data coupling refers to the situation in which modules pass fields of data or messages. All data that is passed is used by the receiving module. Stamp coupling involves modules passing entire record structures. In this case, an entire record will be passed even if only a few fields are needed from the record. Control coupling refers to situations in which a module passes control information to a subordinate module. The subordinate modules use the control information to determine the correct processing to perform. Common coupling involves many modules referring to (and changing) the same global data area. This is hard to detect on a structure chart. Content coupling involves one module referring to the inside of another module. Data coupling is considered “good” coupling because modules pass parameters or specific pieces of data to each other. This is good because the interaction between the modules is very limited. Content coupling is considered “bad” coupling, because one module actually refers to and makes changes to information inside another module. This is bad because the modules will be highly interactive with each other, and will be much more difficult to maintain in the future.
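Data coupling versus stamp coupling is easy to see in code. The employee record below is hypothetical; both functions compute the same bonus, but the stamp-coupled version receives the whole record even though it needs one field, tying it to the record's layout.

```python
# Coupling sketch with a hypothetical employee record.
employee = {"name": "Pat", "salary": 50000, "ssn": "n/a", "dept": "IT"}

def annual_bonus_data_coupled(salary):
    """Data coupling: only the one field actually needed is passed,
    so this module is independent of the record structure."""
    return salary * 0.05

def annual_bonus_stamp_coupled(employee_record):
    """Stamp coupling: the entire record is passed although only
    'salary' is used; renaming or restructuring the record can
    break this module."""
    return employee_record["salary"] * 0.05
```

If the `salary` field were later renamed, the data-coupled version would be unaffected while the stamp-coupled version would fail, which is why data coupling is preferred.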
1. What is meant by the characteristics of fan-in and fan-out? What are their roles in structure chart quality?
Both fan-in and fan-out refer to the number of modules a given module communicates with. High fan-in means that several control modules call the same subordinate module, which indicates that the subordinate performs a task for multiple callers (reuse). High fan-out means that one control module communicates with many subordinate modules; as the number of subordinate modules increases, so does the complexity of the control logic. Ideally, a structure with high fan-in is more efficient and effective than one with high fan-out. When assessing structure chart quality, attention should be paid to any module with high fan-out to determine whether or not it can be redesigned. The rule of thumb is that one control module should communicate with a maximum of seven subordinate modules.
1. List and discuss three ways to ensure the overall quality of a structure chart.
Following structure chart design guidelines produces programs that are modular, reusable, and easy to implement. First, modules should be built with high cohesion. This means that the lines of code within each module relate to each other, and the module performs one and only one task. This makes the modules easy to build, efficient, and easy to understand. Second, modules should be loosely coupled. This means that the modules are independent from each other, so that code changes in one module have minimal impact on other modules. Third, design the structure to create high fan-in and avoid high fan-out. High fan-in implies that a module is called from several places within the structure, meaning that it is reused. Avoiding high fan-out means that we want to minimize the number of subordinate modules associated with a control module. Generally, a control module should have no more than seven subordinate modules.
1. Describe the purpose of program specification.
Program specifications include explicit instructions on how to program pieces of program code. During the preparation of the program specifications, the analyst may discover design problems in the structure chart, or may find better ways to arrange the modules.
What is the difference between structured programming and event-driven programming?
Structured programming involves writing programs and procedures that are executed in a strict order by the computer system, and users have no ability to deviate from that order. Event-driven programming involves developing programs and procedures that are triggered by an event (such as a mouse click). The program ‘waits’ for an event to occur, and then performs the needed tasks. The second section of the Program Specification lists the events associated with a program.
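The 'wait for an event, then dispatch' pattern can be sketched without any GUI library. The event names and handler registry below are invented for illustration; real event-driven environments (such as Visual Basic) wire handlers to controls for you, but the underlying idea is the same.

```python
# Minimal event-driven sketch: handlers register for named events
# and are invoked only when that event fires.

handlers = {}

def on(event_name, handler):
    """Register a handler for an event."""
    handlers.setdefault(event_name, []).append(handler)

def fire(event_name, payload=None):
    """Dispatch an event to every registered handler."""
    return [h(payload) for h in handlers.get(event_name, [])]

on("click", lambda pos: f"button clicked at {pos}")

print(fire("click", (10, 20)))
```

Contrast this with a structured program, where the sequence of actions is fixed by the code; here nothing runs until `fire` delivers an event, and the order of execution is driven by the user.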
1. Is program design more or less important when using event-driven languages such as Visual Basic?
Program design remains important with event-driven languages. Program specifications may still be used, although other tools may be more useful for documenting design in these circumstances, such as the state-transition diagram.
1. Describe the two steps to data storage design.
The first step is to select the appropriate format for the data storage. There are several different methods of storing data (files, relational databases, multidimensional databases, object-oriented databases), and the analyst should select the one that will provide the best approach to storing the system data. Second, the data storage must be designed to optimize its processing efficiency, which involves considering how the data will be used, and making the appropriate design decisions.
1. How are a file and a database different from each other?
Files are essentially an electronic list of information that is formatted for a particular transaction. Any programs that are written must be developed to work with the file exactly as it is laid out. If there is a need to combine data in a new way, a new file must be created (usually by extracting data from other files) and a program written to work specifically with that new file. Databases, on the other hand, are made up of a collection of data sets that are related to each other in some way. Database management system software creates these data groupings. The DBMS provides access to the data and can usually provide access to any desired subset of data. It is not necessary to write new programs to build a new file in order to retrieve data from the database in a new way.
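The point about not needing a new program for each new data combination can be demonstrated with Python's built-in `sqlite3` module. The table and column names are invented; the grouping query at the end represents an ad-hoc question that was never planned for when the table was created.

```python
import sqlite3

# A small in-memory relational database with an invented schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("Ann", 100.0), ("Bob", 40.0), ("Ann", 60.0)])

# An ad-hoc grouping: with a file, this would require extracting
# data into a new file and writing a program for that layout;
# with a DBMS it is one new query.
rows = conn.execute(
    "SELECT customer, SUM(amount) FROM orders "
    "GROUP BY customer ORDER BY customer").fetchall()
print(rows)
```

The DBMS answers the new question directly from the existing tables, which is the practical difference between file-based and database-based storage described above.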
1. What is the difference between an end-user database and an enterprise database? Provide an example of each one.
An end-user database is one that is designed to run on a PC and is used to create personal database applications. An end-user in sales might develop a Microsoft Access database, for example, to keep track of current and prospective client contacts. An enterprise database is one that is capable of handling huge volumes of information for the entire organization. Applications that serve the entire enterprise can be built upon these enterprise databases. These databases are fast, high capacity, but also complex. Oracle is a vendor of enterprise database management systems.
Name five types of files and describe the primary purpose of each type
Master files store the business’s or application’s core data. The data in a master file is considered fairly permanent, does not change much, and is usually retained for long periods. Look-up files contain reference information that is used primarily during validation processing. A list of valid code values for a field might be referred to during data entry to ensure that a valid code was entered. Transaction files contain information that will be used to update a master file. These files are usually temporary in nature; they are used to collect transactions, the transactions update the master file, and then the transaction files are archived. Audit files are used to help trace changes made to files. An image of a record before and after a change may be written to an audit file so that the change is documented. History files serve as archives for older information that is rarely used. These files can be accessed if necessary, but are usually stored on tape to remove the little-used data from the active data storage areas.
1. Name two types of legacy databases and the main problems associated with each type.
Hierarchical databases use hierarchies, or inverted trees, to represent relationships. The main problem with this database model is that it cannot be used efficiently to represent non-hierarchical associations. Network databases avoid this problem, but require a considerable amount of programming effort. Programs must be written to follow the database structure, and if the database structure changes, then complex programming must be done to change the application programs as well.
1. What is the most popular kind of database today? Provide three examples of products that are based on this database technology.
Relational databases are most popular today due to their ease of use and conceptual simplicity. Examples of relational DBMSs on the market include MS Access, Oracle, DB2, Sybase, Informix, and MS SQL Server.
1. What is referential integrity and how is it implemented in a relational database?
Referential integrity refers to the need to ensure that the values linking tables together through primary and foreign keys are valid and correctly synchronized. For example, if a customer is placing an order, we need to have information on that customer in the Customer table. The RDBMS will check that a record for the customer exists in the Customer table before it will let an order be entered. Checking these known required relationships helps ensure referential integrity.
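This check can be demonstrated with Python's built-in sqlite3 module; the Customer/Orders schema here is hypothetical, and note that SQLite enforces foreign keys only when the pragma shown is turned on:

```python
import sqlite3

# Hypothetical Customer/Orders schema illustrating a foreign key check.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite enforces FKs only with this on
conn.execute("CREATE TABLE customer (cust_id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, "
             "cust_id INTEGER NOT NULL REFERENCES customer(cust_id))")

conn.execute("INSERT INTO customer VALUES (1, 'Acme Corp')")
conn.execute("INSERT INTO orders VALUES (100, 1)")        # customer 1 exists: accepted

rejected = False
try:
    conn.execute("INSERT INTO orders VALUES (101, 99)")   # no customer 99
except sqlite3.IntegrityError:
    rejected = True                       # the DBMS blocks the orphan order
```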
1. What is the biggest strength of the object database? Describe two of its weaknesses.
The biggest strength of object databases is the reusability of objects. This accelerates system development and helps keep costs manageable. Object databases are also well suited to storing complex data (e.g., graphics, video, and sound). Two weaknesses of object databases are the lack of experienced developers and the steep learning curve associated with OODBMSs.
1. How does the multidimensional database store data?
Multidimensional databases store data using several dimensions. Data may be aggregated and/or detailed, depending upon the access needs of the users.
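A toy illustration in plain Python, with invented sales figures, of how a multidimensional store keys facts by dimension cells and supports both detailed and aggregated access:

```python
# Toy multidimensional store (invented figures): sales facts keyed by
# (region, month) cells can be read in detail or rolled up along a dimension.
cells = {
    ("East", "Jan"): 100, ("East", "Feb"): 120,
    ("West", "Jan"): 80,  ("West", "Feb"): 90,
}

detail = cells[("East", "Jan")]      # detailed access: one cell

by_region = {}                       # aggregated access: roll up over months
for (region, month), amount in cells.items():
    by_region[region] = by_region.get(region, 0) + amount
```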
1. What are the two most important factors in determining the type of data storage format that should be adopted for a system? Why are these factors so important?
First, evaluate the type of data that will be stored. Relational databases are the standard for simple data such as numbers, text, and dates. If the data is more complex (video, images, or audio), then object databases may be required. If the data needs to be aggregated, then multidimensional databases are recommended. The second factor is the type of system being developed. Transaction processing systems require rapid update and retrieval capability, and will best be constructed using files, relational databases, or object databases. Decision support types of applications require rapid access to data in ad hoc ways. These types of systems are best implemented using relational or multidimensional databases. These two factors are very important because you must select a data storage format that is suitable for the data the system will include and the uses planned for that data.
1. Why should you consider the storage formats that already exist in an organization when deciding upon a storage format for a new system?
This factor is important because the project team needs to be aware of the existing base of technical skills that are available to work with the data storage format. If a data storage format is chosen that is new to the organization, then the team must allocate training and learning time into the project schedule.
1. What are the differences between the logical and physical ERDs?
The logical ERD represents the data required by the application, presenting a 'business view' of the data without implementation details. The physical ERD contains all the elements of the logical ERD plus implementation details, presenting a 'systems view' of how the data will actually be stored in the new system.
1. Describe the metadata associated with the physical ERD.
Metadata included in the physical ERD includes information regarding attributes such as data type, field size, format, default values, primary keys, and foreign keys.
1. Describe the purpose of the primary and foreign keys.
A primary key serves as the unique identifier for each record stored in a table, and one must be identified for each table. A foreign key is an attribute in one table that is the primary key of another table. Identifying foreign keys is important in enforcing referential integrity.
1. Name three ways that null values in a database can be interpreted. Why is this problematic?
A null value in a field can indicate that there should not be a value in the field (i.e., blank is correct). It can also mean that an error was made, and a value that should have been entered was incorrectly omitted. It can also indicate that a value for the field has been deleted, which may or may not be correct. The difficulty in really knowing why the null exists is the major problem with nulls.
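A short sqlite3 sketch (the contact data is invented) shows a further wrinkle that compounds the ambiguity: a NULL never matches an ordinary comparison, so the row silently drops out of both the 'equals' and 'not equals' result sets:

```python
import sqlite3

# Invented contact table with one missing phone number. Does the NULL
# mean "no phone", "not yet entered", or "deleted"? The DBMS cannot say.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contact (id INTEGER PRIMARY KEY, phone TEXT)")
conn.execute("INSERT INTO contact VALUES (1, '555-0100')")
conn.execute("INSERT INTO contact VALUES (2, NULL)")

# A NULL never matches an ordinary comparison, so row 2 drops out of
# BOTH result sets below; only IS NULL finds it.
eq  = conn.execute("SELECT COUNT(*) FROM contact WHERE phone = '555-0100'").fetchone()[0]
ne  = conn.execute("SELECT COUNT(*) FROM contact WHERE phone <> '555-0100'").fetchone()[0]
nul = conn.execute("SELECT COUNT(*) FROM contact WHERE phone IS NULL").fetchone()[0]
```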
1. What are the two dimensions in which to optimize a relational database?
The two dimensions in which to optimize a relational database are for storage efficiency and for speed of access.
1. What is the purpose of normalization?
The purpose of normalization is to optimize the data storage design for storage efficiency. Normalization helps ensure that data redundancy and null values are kept to a minimum.
Describe three situations that can be good candidates for denormalization.
Denormalization is performed to speed up data access. Redundancy is added back into tables in order to reduce the number of joins that are required to produce the desired information. In a normalized Order table, the customer name will not be included; however, it may be added back into the Order table to improve processing speed. This represents a situation in which some parent entity attributes are included in the child entity. Similarly, a lookup table of zip codes and states may be set up in the normalized data model, but the values could be added back into the physical model design. Another situation is where a table of product codes lists the description and price; these may also be added back into the physical model to improve application performance. Lookup tables are common candidates for denormalization. Finally, 1:1 relationships may be good candidates for denormalization, since the information may be accessed together frequently.
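The customer-name example can be sketched with Python's sqlite3 module (table and column names are invented): the normalized design needs a join, while the denormalized order table answers the same question from a single row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Normalized design: the customer name lives only in the customer table,
# so listing an order with its customer name requires a join.
conn.executescript("""
    CREATE TABLE customer (cust_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders   (order_id INTEGER PRIMARY KEY, cust_id INTEGER);
    INSERT INTO customer VALUES (1, 'Acme Corp');
    INSERT INTO orders   VALUES (100, 1);
""")
name = conn.execute(
    "SELECT c.name FROM orders o "
    "JOIN customer c ON c.cust_id = o.cust_id WHERE o.order_id = 100"
).fetchone()[0]

# Denormalized design: the name is copied into the order row. Redundant,
# but the same question is answered with no join at all.
conn.execute("CREATE TABLE orders_dn (order_id INTEGER PRIMARY KEY, "
             "cust_id INTEGER, cust_name TEXT)")
conn.execute("INSERT INTO orders_dn VALUES (100, 1, 'Acme Corp')")
name_dn = conn.execute(
    "SELECT cust_name FROM orders_dn WHERE order_id = 100").fetchone()[0]
```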
Describe several techniques that can improve performance of a database.
Denormalization adds selected fields back to tables in a data model. This adds a little redundancy, but improves the data access speed. Clustering involves physically placing records together so that like records are stored close to each other. Indexing creates small, quickly searchable tables that contain values from the table and indicate where in the table those values can be found. Finally, proper estimation of the data set size is important to assure that adequate hardware is obtained for the system.
What is the difference between interfile and intrafile clustering? Why are they used?
Intrafile clustering physically orders the records within a single table in some meaningful way, such as by primary key value. Interfile clustering identifies records from separate tables that are typically retrieved together and physically stores them together. Both are used to speed up data retrieval: records that are likely to be accessed together can be read from storage together, reducing access time.
What is an index, and how can it improve the performance of a system?
An index is a small, quickly searchable table that contains values from the table and indicates where in the table those values can be found. System performance is improved with an index because it is no longer necessary to search the entire table for the desired values. The small index table can be quickly searched to reveal exactly where the desired values are stored.
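This effect can be observed directly in SQLite's query planner; the product table below is a small invented example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product (sku TEXT, price REAL)")
conn.executemany("INSERT INTO product VALUES (?, ?)",
                 [(f"SKU{i:05d}", float(i)) for i in range(1000)])

query = "SELECT price FROM product WHERE sku = 'SKU00500'"

# Without an index, the planner must scan every row of the table.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[3]

# With an index on sku, the planner searches the small index instead.
conn.execute("CREATE INDEX idx_sku ON product(sku)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[3]
```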
Describe what should be considered when estimating the size of a database.
The size of the database will be based on the amount of raw data expected, the growth rate of raw data that is expected, and the overhead requirements of the DBMS.
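These three inputs combine in simple arithmetic; in the back-of-envelope sketch below, every figure (record size, row count, growth rate, overhead factor) is an invented assumption used only to show the calculation:

```python
# Back-of-envelope database sizing; every figure is an invented assumption.
record_bytes  = 200        # average raw record size
initial_rows  = 500_000    # rows expected at go-live
annual_growth = 0.20       # expected growth rate of the raw data
overhead      = 1.35       # DBMS overhead: indexes, pointers, free space
years         = 3          # planning horizon

rows    = initial_rows * (1 + annual_growth) ** years
size_gb = rows * record_bytes * overhead / 1024 ** 3
```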
Why is it important to understand the initial and projected size of a database during the design phase?
The design team needs to be sure that the hardware that is specified for the system is adequate to support the size of the database. If inadequate hardware is chosen, the performance of the system will be poor regardless of the ‘tuning’ techniques that are applied.
What are the key issues in deciding between using perfectly normalized databases and denormalized databases?
A perfectly normalized database is optimized for storage efficiency, minimizing wasted storage space. This data storage design is not as useful when data must be frequently queried, since the data is spread across many tables that must be joined in processing the query. Access speed will degrade in these circumstances. Therefore, if the data is going to be accessed frequently, it may be valuable to denormalize the design to reduce the number of joins that must be processed in a query.