Advanced Testing of Systems-of-Systems, Volume 2
Practical Aspects
Hardcover, English, 2022
2 099 kr
Special order. Ships within 10-15 business days.
Free shipping for members on purchases of at least 249 kr.

As a society today, we are so dependent on systems-of-systems that any malfunction has devastating consequences, both human and financial. Their technical design, functional complexity and numerous interfaces justify a significant investment in testing in order to limit anomalies and malfunctions.

Based on more than 40 years of practice, this book goes beyond the simple testing of an application – already extensively covered by other authors – to focus on methodologies, techniques, continuous improvement processes, workload estimates, metrics and reporting, illustrated by a case study. It also discusses several challenges for the near future.

Pragmatic and clear, this book offers many examples and references that will help you improve the quality of your systems-of-systems efficiently and effectively, and lead you to identify the impact of upstream decisions and their consequences.

Advanced Testing of Systems-of-Systems 2 deals with the practical implementation and use of the techniques and methodologies proposed in the first volume.
Product information
- Publication date: 2022-12-20
- Dimensions: 161 x 240 x 21 mm
- Weight: 708 g
- Format: Hardcover
- Language: English
- Number of pages: 304
- Publisher: ISTE Ltd and John Wiley & Sons Inc
- ISBN: 9781786307507
Bernard Homès is the founder of ISTQB, AST and the CFTL, a senior member of the IEEE Standards Association and president of TESSCO sas. He has published several recognized books on software testing, standards, and the ISTQB Advanced v2007 and v2012 syllabi.
Table of contents

Dedication and Acknowledgments xiii
Preface xv
Chapter 1. Test Project Management 1
  1.1 General principles 1
    1.1.1 Quality of requirements 2
    1.1.2 Completeness of deliveries 3
    1.1.3 Availability of test environments 3
    1.1.4 Availability of test data 4
    1.1.5 Compliance of deliveries and schedules 5
    1.1.6 Coordinating and setting up environments 6
    1.1.7 Validation of prerequisites – Test Readiness Review (TRR) 6
    1.1.8 Delivery of datasets (TDS) 7
    1.1.9 Go-NoGo decision – Test Review Board (TRB) 7
    1.1.10 Continuous delivery and deployment 8
  1.2 Tracking test projects 9
  1.3 Risks and systems-of-systems 10
  1.4 Particularities related to SoS 11
  1.5 Particularities related to SoS methodologies 11
    1.5.1 Components definition 12
    1.5.2 Testing and quality assurance activities 12
  1.6 Particularities related to teams 12
Chapter 2. Testing Process 15
  2.1 Organization 17
  2.2 Planning 18
    2.2.1 Project WBS and planning 19
  2.3 Control of test activities 21
  2.4 Analyze 22
  2.5 Design 23
  2.6 Implementation 24
  2.7 Test execution 25
  2.8 Evaluation 26
  2.9 Reporting 28
  2.10 Closure 29
  2.11 Infrastructure management 29
  2.12 Reviews 30
  2.13 Adapting processes 31
  2.14 RACI matrix 32
  2.15 Automation of processes or tests 33
    2.15.1 Automate or industrialize? 33
    2.15.2 What to automate? 33
    2.15.3 Selecting what to automate 34
Chapter 3. Continuous Process Improvement 37
  3.1 Modeling improvements 37
    3.1.1 PDCA and IDEAL 38
    3.1.2 CTP 39
    3.1.3 SMART 41
  3.2 Why and how to improve? 41
  3.3 Improvement methods 42
    3.3.1 External/internal referential 42
  3.4 Process quality 46
    3.4.1 Fault seeding 46
    3.4.2 Statistics 46
    3.4.3 A posteriori 47
    3.4.4 Avoiding introduction of defects 47
  3.5 Effectiveness of improvement activities 48
  3.6 Recommendations 50
Chapter 4. Test, QA or IV&V Teams 51
  4.1 Need for a test team 52
  4.2 Characteristics of a good test team 53
  4.3 Ideal test team profile 54
  4.4 Team evaluation 55
    4.4.1 Skills assessment table 56
    4.4.2 Composition 58
    4.4.3 Select, hire and retain 59
  4.5 Test manager 59
    4.5.1 Lead or direct? 60
    4.5.2 Evaluate and measure 61
    4.5.3 Recurring questions for test managers 62
  4.6 Test analyst 63
  4.7 Technical test analyst 64
  4.8 Test automator 65
  4.9 Test technician 66
  4.10 Choose our testers 66
  4.11 Training, certification or experience? 67
  4.12 Hire or subcontract? 67
    4.12.1 Effective subcontracting 68
  4.13 Organization of multi-level test teams 68
    4.13.1 Compliance, strategy and organization 69
    4.13.2 Unit test teams (UT/CT) 70
    4.13.3 Integration testing team (IT) 70
    4.13.4 System test team (SYST) 70
    4.13.5 Acceptance testing team (UAT) 71
    4.13.6 Technical test teams (TT) 71
  4.14 Insourcing and outsourcing challenges 72
    4.14.1 Internalization and collocation 72
    4.14.2 Near outsourcing 73
    4.14.3 Geographically distant outsourcing 74
Chapter 5. Test Workload Estimation 75
  5.1 Difficulty to estimate workload 75
  5.2 Evaluation techniques 76
    5.2.1 Experience-based estimation 76
    5.2.2 Based on function points or TPA 77
    5.2.3 Requirements scope creep 79
    5.2.4 Estimations based on historical data 80
    5.2.5 WBS or TBS 80
    5.2.6 Agility, estimation and velocity 81
    5.2.7 Retroplanning 82
    5.2.8 Ratio between developers – testers 82
    5.2.9 Elements influencing the estimate 83
  5.3 Test workload overview 85
    5.3.1 Workload assessment verification and validation 86
    5.3.2 Some values 86
  5.4 Understanding the test workload 87
    5.4.1 Component coverage 87
    5.4.2 Feature coverage 88
    5.4.3 Technical coverage 88
    5.4.4 Test campaign preparation 89
    5.4.5 Running test campaigns 89
    5.4.6 Defects management 90
  5.5 Defending our test workload estimate 91
  5.6 Multi-tasking and crunch 92
  5.7 Adapting and tracking the test workload 92
Chapter 6. Metrics, KPI and Measurements 95
  6.1 Selecting metrics 96
  6.2 Metrics precision 97
    6.2.1 Special case of the cost of defaults 97
    6.2.2 Special case of defects 98
    6.2.3 Accuracy or order of magnitude? 98
    6.2.4 Measurement frequency 99
    6.2.5 Using metrics 99
    6.2.6 Continuous improvement of metrics 100
  6.3 Product metrics 101
    6.3.1 FTR: first time right 101
    6.3.2 Coverage rate 102
    6.3.3 Code churn 103
  6.4 Process metrics 104
    6.4.1 Effectiveness metrics 104
    6.4.2 Efficiency metrics 107
  6.5 Definition of metrics 108
    6.5.1 Quality model metrics 109
  6.6 Validation of metrics and measures 110
    6.6.1 Baseline 110
    6.6.2 Historical data 111
    6.6.3 Periodic improvements 112
  6.7 Measurement reporting 112
    6.7.1 Internal test reporting 113
    6.7.2 Reporting to the development team 114
    6.7.3 Reporting to the management 114
    6.7.4 Reporting to the clients or product owners 115
    6.7.5 Reporting to the direction and upper management 116
Chapter 7. Requirements Management 119
  7.1 Requirements documents 119
  7.2 Qualities of requirements 120
  7.3 Good practices in requirements management 122
    7.3.1 Elicitation 122
    7.3.2 Analysis 123
    7.3.3 Specifications 123
    7.3.4 Approval and validation 124
    7.3.5 Requirements management 124
    7.3.6 Requirements and business knowledge management 125
    7.3.7 Requirements and project management 125
  7.4 Levels of requirements 126
  7.5 Completeness of requirements 126
    7.5.1 Management of TBDs and TBCs 126
    7.5.2 Avoiding incompleteness 127
  7.6 Requirements and agility 127
  7.7 Requirements issues 128
Chapter 8. Defects Management 129
  8.1 Defect management, MOA and MOE 129
    8.1.1 What is a defect? 129
    8.1.2 Defects and MOA 130
    8.1.3 Defects and MOE 130
  8.2 Defect management workflow 131
    8.2.1 Example 131
    8.2.2 Simplify 132
  8.3 Triage meetings 133
    8.3.1 Priority and severity of defects 133
    8.3.2 Defect detection 134
    8.3.3 Correction and urgency 135
    8.3.4 Compliance with processes 136
  8.4 Specificities of TDDs, ATDDs and BDDs 136
    8.4.1 TDD: test-driven development 136
    8.4.2 ATDD and BDD 137
  8.5 Defects reporting 138
    8.5.1 Defects backlog management 139
  8.6 Other useful reporting 141
  8.7 Don't forget minor defects 141
Chapter 9. Configuration Management 143
  9.1 Why manage configuration? 143
  9.2 Impact of configuration management 144
  9.3 Components 145
  9.4 Processes 145
  9.5 Organization and standards 146
  9.6 Baseline or stages, branches and merges 147
    9.6.1 Stages 148
    9.6.2 Branches 148
    9.6.3 Merge 148
  9.7 Change control board (CCB) 149
  9.8 Delivery frequencies 149
  9.9 Modularity 150
  9.10 Version management 150
  9.11 Delivery management 151
    9.11.1 Preparing for delivery 153
    9.11.2 Delivery validation 154
  9.12 Configuration management and deployments 155
Chapter 10. Test Tools and Test Automation 157
  10.1 Objectives of test automation 157
    10.1.1 Find more defects 158
    10.1.2 Automating dynamic tests 159
    10.1.3 Find all regressions 160
    10.1.4 Run test campaigns faster 161
  10.2 Test tool challenges 161
    10.2.1 Positioning test automation 162
    10.2.2 Test process analysis 162
    10.2.3 Test tool integration 162
    10.2.4 Qualification of tools 163
    10.2.5 Synchronizing test cases 164
    10.2.6 Managing test data 164
    10.2.7 Managing reporting (level of trust in test tools) 165
  10.3 What to automate? 165
  10.4 Test tooling 166
    10.4.1 Selecting tools 167
    10.4.2 Computing the return on investment (ROI) 169
    10.4.3 Avoiding abandonment of tools and automation 169
  10.5 Automated testing strategies 170
  10.6 Test automation challenge for SoS 171
    10.6.1 Mastering test automation 171
    10.6.2 Preparing test automation 173
    10.6.3 Defect injection/fault seeding 173
  10.7 Typology of test tools and their specific challenges 174
    10.7.1 Static test tools versus dynamic test tools 175
    10.7.2 Data-driven testing (DDT) 176
    10.7.3 Keyword-driven testing (KDT) 176
    10.7.4 Model-based testing (MBT) 177
  10.8 Automated regression testing 178
    10.8.1 Regression tests in builds 178
    10.8.2 Regression tests when environments change 179
    10.8.3 Prevalidation regression tests, sanity checks and smoke tests 179
    10.8.4 What to automate? 180
    10.8.5 Test frameworks 182
    10.8.6 E2E test cases 183
    10.8.7 Automated test case maintenance or not? 184
  10.9 Reporting 185
    10.9.1 Automated reporting for the test manager 186
Chapter 11. Standards and Regulations 187
  11.1 Definition of standards 189
  11.2 Usefulness and interest 189
  11.3 Implementation 190
  11.4 Demonstration of compliance – IADT 190
  11.5 Pseudo-standards and good practices 191
  11.6 Adapting standards to needs 191
  11.7 Standards and procedures 192
  11.8 Internal and external coherence of standards 192
Chapter 12. Case Study 195
  12.1 Case study: improvement of an existing complex system 195
    12.1.1 Context and organization 196
    12.1.2 Risks, characteristics and business domains 198
    12.1.3 Approach and environment 200
    12.1.4 Resources, tools and personnel 210
    12.1.5 Deliverables, reporting and documentation 212
    12.1.6 Planning and progress 213
    12.1.7 Logistics and campaigns 216
    12.1.8 Test techniques 217
    12.1.9 Conclusions and return on experience 218
Chapter 13. Future Testing Challenges 223
  13.1 Technical debt 223
    13.1.1 Origin of the technical debt 224
    13.1.2 Technical debt elements 225
    13.1.3 Measuring technical debt 226
    13.1.4 Reducing technical debt 227
  13.2 Systems-of-systems specific challenges 228
  13.3 Correct project management 229
  13.4 DevOps 230
    13.4.1 DevOps ideals 231
    13.4.2 DevOps-specific challenges 231
  13.5 IoT (Internet of Things) 232
  13.6 Big Data 233
  13.7 Services and microservices 234
  13.8 Containers, Docker, Kubernetes, etc. 235
  13.9 Artificial intelligence and machine learning (AI/ML) 235
  13.10 Multi-platforms, mobility and availability 237
  13.11 Complexity 238
  13.12 Unknown dependencies 238
  13.13 Automation of tests 239
    13.13.1 Unrealistic expectations 240
    13.13.2 Difficult to reach ROI 241
    13.13.3 Implementation difficulties 242
    13.13.4 Think about maintenance 243
    13.13.5 Can you trust your tools and your results? 244
  13.14 Security 245
  13.15 Blindness or cognitive dissonance 245
  13.16 Four truths 246
    13.16.1 Importance of individuals 247
    13.16.2 Quality versus quantity 247
    13.16.3 Training, experience and expertise 248
    13.16.4 Usefulness of certifications 248
  13.17 Need to anticipate 249
  13.18 Always reinvent yourself 250
  13.19 Last but not least 250
Terminology 253
References 261
Index 267
Summary of Volume 1 269