1. Ensure that the product delivers the promised or announced functions, and that every function accessible to users is clearly documented in writing. In a sense, this is the same idea as ISO 9001.
The lack of clear written documentation is short-sighted and irresponsible behavior on the manufacturer's part. It is short-sighted because missing documentation hinders smooth product delivery, invites conflict with users, and damages the manufacturer's reputation and its future cooperation with users; at the same time, it makes later maintenance of the product harder and forces the manufacturer to spend far too much on user training and technical support. Measured against long-term interests, it simply does not pay. Of the software products I have come across, few escape the pattern of "big product, thin documentation."
Of course, in rapid prototyping (RAD) projects, writing and maintaining documentation is at once the most important, the most difficult, and the most easily overlooked task.
Finally, incomplete or even inaccurate written documentation is also the biggest and most troublesome problem encountered in testing. Its direct consequences are low testing efficiency, unclear test objectives, and insufficient test coverage, which prevent the final round of testing from playing its full role and leave the results unsatisfactory.
2. Ensure that the product meets its performance and efficiency requirements.
A product with low operating efficiency (poor performance), an unfriendly user interface, and inconvenient operation (poor usability) cannot be called a competitive product.
What users care about most is not how advanced or powerful your technology is, but how much benefit they can get from those technologies and functions. In other words, users care about how much they get out of the product, not how much you put into it.
3. Ensure that the product is robust and can adapt to the user environment.
Robustness, that is, stability, is a basic requirement of product quality, especially in business-critical or time-critical working environments.
In addition, you cannot make assumptions about the user's environment (except in certain projects). For example, many newspaper-industry users run on low-end configurations and use the product alongside third-party products.
The principle of testing: good enough
For a relatively complex product or system, zero bugs is an ideal; "good enough" is our principle.
The "good enough" principle is a way of measuring the input/output ratio: testing too little is irresponsible, and over-testing is a waste of resources and equally irresponsible. The difficulty in practice lies in defining what counts as insufficient testing and what counts as excessive testing. At present, the only workable answer is to set minimum passing criteria and minimum test content, and then analyze each case on its own terms. The most obvious example is the product testing of China fit3.0.
Laws of testing: Cannikin's Law and the 80-20 principle
1. Cannikin's Law (the wooden-bucket principle).
In software production, this is the idea of total quality management (TQM). The key determinants of product quality are analysis, design, and implementation; testing should be an auxiliary means of inspection, and other factors such as management, support, and even culture also affect the quality of the final product. Just as the shortest stave determines how much water a wooden bucket can hold, the weakest link in the process determines the final quality. Testing is a necessary condition for improving product quality, and the most direct and fastest means, but it is by no means the fundamental one. Conversely, if all the weight of improving product quality is placed on testing, the result will be a terrible and protracted disaster.
2. The 80-20 principle of bugs.
Generally speaking, 80% of bugs can be found and prevented through reviews and testing during the analysis, design, and implementation stages; 80% of the remaining bugs can then be found in system testing; and the last roughly 5% may only surface after users have used the product widely over a long period. Testing can only ensure that as many errors as possible are found; it cannot guarantee that all errors are found.
Methods of software testing:
1. Divided by whether the internal structure of the program is examined:
(1) Black-box testing: concerned only with inputs and output results.
(2) White-box testing: examines the source code and program structure.
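The contrast can be sketched with a small example. Everything here (the leap-year function and its test cases) is a hypothetical illustration, not from the original text: black-box cases come only from the specification, while white-box cases are chosen to cover the code's branches.

```python
def is_leap_year(year: int) -> bool:
    """Return True if `year` is a leap year in the Gregorian calendar."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Black-box: cases chosen only from the specification (input -> output),
# without looking at the code.
black_box_cases = {2024: True, 2023: False, 2000: True, 1900: False}

# White-box: cases chosen to exercise each branch of the implementation
# (divisible by 4, by 100, by 400).
white_box_cases = {4: True, 100: False, 400: True, 3: False}

for cases in (black_box_cases, white_box_cases):
    for year, expected in cases.items():
        assert is_leap_year(year) == expected

print("all cases passed")
```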
2. Divided by whether the program is run:
(1) Static testing: checking program code, interfaces, or documents for possible errors without actually running the software under test.
Static tests include:
For code, it mainly checks whether the code conforms to the relevant standards and specifications.
For interfaces, it mainly checks whether the software's actual interface matches the description in the requirements.
For documents, it mainly checks whether the user manual and requirements description truly match users' actual needs.
(2) Dynamic testing: actually running the program under test, feeding it the corresponding test data, and checking whether the actual output matches the expected output.
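A static code check (testing whether code meets a standard without running it) can be sketched as follows. The naming rule and the sample source are hypothetical illustrations, not from the original text:

```python
import ast

# Sample source code to be checked statically (never executed).
SOURCE = """
def GoodName():
    pass

def bad_name():
    pass
"""

def check_function_names(source: str):
    """Return the names of functions that violate a (hypothetical)
    lower-case naming rule, by inspecting the parse tree only."""
    tree = ast.parse(source)
    return [node.name for node in ast.walk(tree)
            if isinstance(node, ast.FunctionDef)
            and node.name != node.name.lower()]

violations = check_function_names(SOURCE)
print("naming violations:", violations)  # -> ['GoodName']
```

Real-world static testing of this kind is usually delegated to linters and style checkers rather than written by hand.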
3. Divided by development stage:
(1) Unit testing: checking and verifying the smallest testable units in the software.
A stub module simulates a module called by the module under test; a driver module simulates the superior (calling) module of the module under test. The driver module receives the test data, invokes the module under test, and outputs the results.
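The stub/driver arrangement can be sketched in a few lines. The module names and prices here are hypothetical illustrations, not from the original text:

```python
def fetch_price(product_id: str) -> float:
    """Stub: stands in for a lower-level module that the unit under test
    calls, returning canned data instead of querying a real database."""
    return {"A1": 10.0, "B2": 25.5}[product_id]

def total_price(product_ids, fetch=fetch_price):
    """Unit under test: sums the prices of the given products."""
    return sum(fetch(pid) for pid in product_ids)

def driver():
    """Driver: stands in for the upper-level calling module. It feeds
    test data to the unit under test and checks the result."""
    result = total_price(["A1", "B2"])
    assert result == 35.5, result
    print("unit test passed:", result)

driver()
```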
(2) Integration testing is the stage after unit testing: unit modules that have passed their tests are assembled into a system or subsystem and tested together, with the focus on the interfaces between modules.
Integration testing checks whether the unit modules can work together and run normally.
(3) System testing: testing the whole software system as a whole, including its functions, its performance, and the software and hardware environment in which it runs.
The main basis for system testing is the software requirements specification.
(4) Acceptance testing: testing conducted mainly by users, with the participation of quality-assurance personnel such as testers, in the later stage of system testing; it is the last step before the software is formally handed over to users.
Acceptance testing is divided into alpha testing and beta testing. Alpha testing is internal testing in which users, testers, and developers participate together; beta testing is the public testing that follows, in which the software is handed over entirely to end users.
4. Black-box testing is divided into functional testing and performance testing:
1) Functional testing is one aspect of black-box testing; it checks whether the software's actual functions meet users' needs.
It includes:
Logic function testing.
UI testing (UI = user interface).
Usability testing: checking the software system from the standpoint of how reasonable and convenient it is to use, and finding the places where it is inconvenient.
Compatibility testing: including hardware compatibility testing and software compatibility testing.
2) Performance test
The performance of software mainly covers time performance and space performance.
Time performance: mainly the response time of specific software transactions.
Space performance: mainly the system resources consumed while the software runs.
Software performance testing is divided into:
General performance testing: running the system under test in a normal software and hardware environment without applying any load.
Stability testing, also known as reliability testing: running the system under test continuously for a long time to check how stable it remains while running.
Load testing: running the system under test continuously at the limit of the load it can bear, to test how stable the system remains.
Stress testing: steadily increasing the load on the system under test until it breaks down, in order to find the maximum load the system can bear.
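The time-performance idea above can be sketched as a minimal measurement harness. The "transaction" here is a hypothetical stand-in; real performance and stress tests would drive the deployed system under controlled, increasing load:

```python
import time

def transaction():
    """Hypothetical stand-in for a software transaction being timed."""
    sum(range(10_000))

def measure(n_calls: int = 100):
    """Call the transaction repeatedly and record per-call response times."""
    times = []
    for _ in range(n_calls):
        start = time.perf_counter()
        transaction()
        times.append(time.perf_counter() - start)
    return {"avg": sum(times) / len(times), "max": max(times)}

stats = measure()
print(f"avg response: {stats['avg']:.6f}s, worst case: {stats['max']:.6f}s")
```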
5. Other test types:
Regression testing: when a new build or release of the software is deployed, re-running the test cases of the previous version against it.
Smoke testing: before large-scale testing of a new version begins, verifying that the software's basic functions work and that it is testable at all.
Random testing: all input data in the test are generated randomly; the purpose is to simulate real user operation and uncover edge-case errors.
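Random testing can be sketched by generating random inputs and checking an invariant rather than fixed expected values. The function and the invariant here are hypothetical illustrations, not from the original text:

```python
import random

def clamp(x: int, low: int, high: int) -> int:
    """Unit under test: restrict x to the range [low, high]."""
    return max(low, min(x, high))

random.seed(42)  # fixed seed so failures are reproducible
for _ in range(1000):
    low = random.randint(-100, 100)
    high = low + random.randint(0, 100)      # guarantees low <= high
    x = random.randint(-1000, 1000)          # wide range to hit edge values
    result = clamp(x, low, high)
    assert low <= result <= high             # invariant must hold every time

print("1000 random cases passed")
```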
Software testing includes the following steps:
1. Make a test plan.
2. Design test cases.
3. Carry out the testing (first setting up a test environment).
4. Manage the defects found during testing.
5. Retest after the defects found have been corrected.
6. Write a test report. With that, the testing process is complete, covering all the test levels (unit testing, integration testing, system testing, and acceptance testing).