How to evaluate software quality?

xiaoxiao · 2021-03-06

Abstract: This paper starts from the basic concepts of software quality and discusses how to select evaluation indicators and build an index system for software quality assessment.

Keywords: software quality, quality assessment, index system

1 The Concept of Software Quality

Software quality is the degree to which a software product satisfies stated or implied requirements. According to the national software quality standard GB/T 8566-2001, software quality assessment usually begins with an analysis of the software quality framework.

1.1 The Software Quality Framework Model

As shown in Figure 1, the software quality framework is a three-layer model: quality characteristics, quality sub-characteristics, and metric factors. The top layer consists of management-oriented quality characteristics; each characteristic describes one set of properties of software quality and represents one aspect of it. Software quality cannot be characterized from the outside of the software alone; it must also be determined from its internal features. The sub-characteristics of the second layer refine the quality characteristics above them, and a given sub-characteristic may correspond to several quality characteristics. Quality sub-characteristics serve as the communication channel between managers and technical staff on software quality issues. The bottom layer consists of software quality metrics (including various parameters) used to measure the quality characteristics. These quantified metrics can be measured or determined statistically, and from them the values of the sub-characteristics and, finally, of the characteristics are obtained.

Figure 1: The software quality framework model

1.2 Software Quality Characteristics

According to GB/T 8566-2001, software quality can be evaluated along the following characteristics:

a. Functionality: a set of attributes relating to the existence of a set of functions and their specified properties, where the functions are those that satisfy stated or implied needs.
b. Reliability: a set of attributes relating to the software's ability to maintain its level of performance under stated conditions for a stated period of time.
c. Usability: a set of attributes relating to the effort needed for use, and to the individual assessment of such use, by a stated or implied set of users.
d. Efficiency: a set of attributes relating to the relationship between the software's level of performance and the amount of resources used, under stated conditions.
e. Maintainability: a set of attributes relating to the effort needed to make specified modifications.
f. Portability: a set of attributes relating to the ability of the software to be transferred from one environment to another.

Each of these quality characteristics corresponds to several sub-characteristics.

2 Principles for Selecting Evaluation Indicators

Selecting an appropriate indicator system and quantifying it are the key to software testing and evaluation. Assessment indicators fall into two kinds: qualitative and quantitative. In theory, quantitative indicators should be chosen wherever possible so that the quality characteristics of the software are reflected scientifically and objectively. But for most software, not every quality characteristic can be described with quantitative indicators, so some qualitative indicators are unavoidable. When selecting evaluation indicators, the following principles should be observed:

a. Targeted: the indicators should distinguish the software under evaluation from general software systems and reflect its essential characteristics, typically its functionality and high reliability.
b. Measurable: the indicators can be quantified, and concrete data can be obtained through mathematical calculation, platform testing, or empirical statistics.
c. Understandable: the indicators are easy for all parties to understand and accept.
d. Complete: the selected indicators should cover the whole range involved in the analysis target.
e. Objective: the indicators should objectively reflect the essential characteristics of the software and must not vary from person to person.

Note that more evaluation indicators are not necessarily better; what matters is how large a role each indicator plays in the evaluation. Too many indicators not only complicate the results but can even undermine the objectivity of the assessment. Indicators are generally determined top-down, decomposed layer by layer, and comprehensively balanced during the process.

3 The Software Quality Assessment Index System

Usually, when we test and evaluate software, we focus mainly on functionality, reliability, usability, and efficiency. When carrying out the evaluation, the development task specification of the software under evaluation should serve as the main basis, the indicators should be derived top-down, and the relevant national software quality standards should be consulted.

3.1 Functionality Indicators

Functionality is one of the most important quality characteristics of software; it can be refined into completeness and correctness.
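The three-layer "characteristic → sub-characteristic → metric" model and the top-down, weighted roll-up of quantified indicators described above can be sketched in code. This is a toy illustration only: all characteristic names, metric names, weights, and scores below are hypothetical, and the source text does not prescribe any particular aggregation formula.

```python
# Toy sketch of the three-layer quality index system: each quality
# characteristic decomposes into weighted sub-characteristics, and each
# sub-characteristic is measured by weighted metric factors scored in [0, 1].
# All names, weights, and scores are hypothetical examples.

index_system = {
    "functionality": {
        "completeness": {"weight": 0.6, "metrics": {"requirements_covered": (1.0, 0.95)}},
        "correctness":  {"weight": 0.4, "metrics": {"test_cases_passed": (1.0, 0.90)}},
    },
    "reliability": {
        "maturity":       {"weight": 0.5, "metrics": {"mtbf_score": (0.7, 0.80),
                                                      "fd_score": (0.3, 0.85)}},
        "recoverability": {"weight": 0.5, "metrics": {"mttr_score": (1.0, 0.75)}},
    },
}

def score_sub(sub):
    # Weighted average of metric scores; each metric holds a (weight, score) pair.
    total_w = sum(w for w, _ in sub["metrics"].values())
    return sum(w * s for w, s in sub["metrics"].values()) / total_w

def score_characteristic(subs):
    # Roll the sub-characteristic scores up to the characteristic level.
    total_w = sum(sub["weight"] for sub in subs.values())
    return sum(sub["weight"] * score_sub(sub) for sub in subs.values()) / total_w

for name, subs in index_system.items():
    print(f"{name}: {score_characteristic(subs):.4f}")
```

The weighted-average roll-up mirrors the text's point that quantified bottom-layer metrics are measured first and the sub-characteristic and characteristic values are derived from them.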

At present, the functionality of software is evaluated mainly with qualitative methods.

a. Completeness: the attribute of the software's functions being present and complete. If the software as actually delivered implements fewer functions than, or fails to meet, the stated or implied functions specified in the development task specification, its functionality cannot be called complete.
b. Correctness: whether the software can produce correct, or at least acceptable, results or effects. The correctness of the software is largely determined by the engineering models built into its modules (which directly affect the precision of auxiliary computation and the quality of auxiliary decision plans) and by the skill of the programmers who wrote it.

These two sub-characteristics are evaluated mainly on the results of functional testing; the criterion is how well the functions exhibited in actual operation match the predetermined functions. The development task specification states the functions the software must provide, such as information management, generation of auxiliary decision plans, office support, and resource updating; the software under test should exhibit these stated or implied functions. At present, functional testing mainly consists of designing a number of typical test cases for each function, running the test cases during testing, and comparing the results against known expected answers. The comprehensiveness, typicality, and authority of the test cases are therefore the key to functional evaluation.

3.2 Reliability Indicators

According to the relevant software test and evaluation requirements, reliability can be refined into maturity, stability, and recoverability. The reliability of software is evaluated mainly with quantitative methods.
That is, appropriate reliability metric factors (reliability parameters) are selected, reliability data are analyzed to obtain concrete values for those parameters, and the evaluation is then performed. The reliability metrics can be obtained by refining and decomposing software reliability, with reference to the development task specification.

a. Availability: the probability that the software is in a usable state when, at an arbitrary moment during operation, it needs to perform a predetermined task or function. Availability is a comprehensive measure of the reliability of application software (i.e., across the integrated operating environment and the various tasks and functions).
b. Initial failure rate: the number of failures per unit time during the initial failure period (generally the first three months after the software is delivered to the user), usually expressed as failures per 100 hours. It can be used to assess the quality of the delivered software and to predict when its operation will become basically stable. The initial failure rate depends on factors such as the quality of the design, the number of items checked, the size of the software, and how thoroughly it was debugged.
c. Random failure rate: the number of failures per unit time during the random failure period (generally from the fourth month after delivery onward), usually expressed as failures per 1,000 hours. It reflects the quality of the software in its steady state.
d. Mean time to failure (MTTF): the average statistical time the software works properly before a failure occurs.
e. Mean time between failures (MTBF): the average statistical time between two successive failures. In actual use, MTBF usually denotes, for large N, the average statistical time between the Nth and the (N+1)th failure. When the failure rate is constant and recovery time is negligible relative to normal operating time, MTBF is almost equal to MTTF.
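Under the definitions above, MTTF, MTBF, and availability can be estimated from a failure log. The following is a minimal sketch, assuming a hypothetical log of (failure time, recovery time) pairs in elapsed hours of operation; the numbers are illustrative, not real data.

```python
# Estimate MTTF, MTTR, MTBF, and availability from a hypothetical failure
# log: (failure_time, recovery_time) pairs in elapsed hours, observed
# from time 0. All numbers are illustrative.

failure_log = [(120.0, 122.0), (350.0, 351.0), (700.0, 704.0)]
n = len(failure_log)

# Up time before each failure: from the previous recovery (or 0) to the failure.
up_times = []
prev_recovery = 0.0
for fail, recover in failure_log:
    up_times.append(fail - prev_recovery)
    prev_recovery = recover

mttf = sum(up_times) / n                       # mean time to failure
mttr = sum(r - f for f, r in failure_log) / n  # mean time to repair
mtbf = mttf + mttr                             # mean time between failures
availability = mttf / mtbf                     # steady-state availability

print(f"MTTF = {mttf:.1f} h, MTTR = {mttr:.1f} h, "
      f"MTBF = {mtbf:.1f} h, availability = {availability:.3f}")
```

Note that MTBF = MTTF + MTTR under this accounting, so MTBF is approximately equal to MTTF exactly when repair time is small relative to up time, as the text observes.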
The MTBF of typical commercial civilian software abroad is generally around 1,000 hours; for software with high reliability requirements, 1,000 to 10,000 hours is required.

f. Defect density (FD): the number of defects hidden in the source code, usually expressed as defects per thousand lines of non-comment code. In general, the value of FD can be estimated from earlier versions of the same software system; if no earlier version information exists, it can be estimated from commonly cited statistics. Typical statistics show that during the development phase there are 50 to 60 defects per thousand lines of source code, with a smaller average number per thousand lines remaining in the delivered software.
g. Mean time to repair (MTTR): the average statistical time required to restore normal operation after a software failure.
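Defect density as defined above is a simple ratio of defect count to effective code size. The sketch below shows one way to compute it, excluding blank and comment-only lines as the definition suggests; the defect counts, the helper names, and the sample source are hypothetical.

```python
# Compute defect density (FD) in defects per thousand lines (KLOC) of
# non-comment, non-blank source code. Counts are hypothetical examples.

def count_effective_lines(source: str) -> int:
    """Count lines that are neither blank nor pure '#' comment lines."""
    count = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            count += 1
    return count

def defect_density(defects_found: int, effective_lines: int) -> float:
    """Defects per thousand lines of effective source code."""
    return defects_found / (effective_lines / 1000.0)

sample = """\
# module header comment
def add(a, b):
    return a + b

def sub(a, b):
    return a - b
"""

print(count_effective_lines(sample))            # counts only code lines
print(defect_density(defects_found=55, effective_lines=50_000))
```

For example, 55 defects found in 50,000 effective lines gives an FD of 1.1 defects per KLOC, far below the 50 to 60 per KLOC that the text cites as typical during development.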

Please cite the original source when reposting: https://www.9cbs.com/read-39594.html
