Proactive User Acceptance Testing – Confident Competence – The Testing Users Need to Be Confident the Software They Depend On Works

Spring 2018 Course – Canceled

Date: Thursday, June 21, 2018

Time: 8:30 am – 5:00 pm

Decision date: Thursday, June 14, 2018

Early Registration Date: Thursday, June 7, 2018

Before Early Registration Date:

Members $235
Non-members $260

After Early Registration Date:

Members $260
Non-members $280

WHERE: Woburn/Burlington Area

Phone: 781-245-5405

Email: sec.boston@ieee.org

If paying by check, the check must be received before the applicable early registration and decision dates.

Make checks payable and send to:
IEEE Boston Section
One Centre Street, Suite 203
Wakefield, MA 01880

Speaker: Robin Goldsmith, President, Go Pro Management, Inc.

Projects aren’t complete until users/customers are sure the systems they depend on actually meet business requirements, work properly, and truly help them do their jobs efficiently and effectively. Yet users are seldom confident or comfortable testing system acceptability, so project managers and testing professionals need to know how to guide and facilitate effective acceptance testing without usurping the user’s primary role. This intensive, interactive seminar shows what users need to know to make the best use of their time planning and conducting acceptance tests that catch more defects at the traditional tail end of development, while also contributing, in appropriate ways, to reducing the number of errors that escape development for users to catch in UAT. Exercises give practice applying practical methods and techniques.

Participants will learn:

* Appropriate testing roles for users, developers, and professional testers, as well as what each shouldn’t test.
* How Proactive Testing throughout the life cycle reduces the number of errors left to find in UAT.
* Key testing concepts, techniques, and strategies that facilitate adaptation to your situation.
* Systematically expanding acceptance criteria to an acceptance test plan, test designs, and test cases (see the sketch after this list).
* Supplementing with requirements-based tests, use cases, and high-level structural white box tests.
* Techniques for obtaining/capturing test data and carrying out acceptance tests.
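
For instance, a single acceptance criterion such as "orders of $100 or more receive a 10% discount" can expand into concrete, checkable test cases. The following is a minimal Python sketch, assuming an invented discount rule and function name (neither comes from the course materials):

    # Hypothetical system under test: the function name, $100 threshold,
    # and 10% discount rate are invented for this illustration.
    def discounted_total(order_total: float) -> float:
        """Apply a 10% discount to orders of $100 or more."""
        if order_total >= 100.00:
            return round(order_total * 0.90, 2)
        return order_total

    # Test cases expanded from the acceptance criterion: each pairs an
    # input with the result the user expects to observe.
    def test_discount_starts_at_threshold():
        assert discounted_total(100.00) == 90.00

    def test_no_discount_just_below_threshold():
        assert discounted_total(99.99) == 99.99

    def test_discount_applies_above_threshold():
        assert discounted_total(150.00) == 135.00

    if __name__ == "__main__":
        test_discount_starts_at_threshold()
        test_no_discount_just_below_threshold()
        test_discount_applies_above_threshold()
        print("All acceptance test cases passed.")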

WHO SHOULD ATTEND: This course has been designed for business managers and system users responsible for conducting user acceptance testing of systems they must depend on, as well as for system and project managers, analysts, developers, quality/testing professionals, and auditors.

ROLE OF USER ACCEPTANCE TESTING
Why users may resist involvement
Making users confident about testing
Objectives, types, and scope of testing
Acceptance testing as user’s self-defense
Why technical tests don’t catch all the errors
Essential elements of effective testing
CAT-Scan Approach to find more errors
Proactive Testing Life Cycle model
Separate technical and acceptance test paths
Place of UAT in overall test structure
Making sure important tests are done first
Developer/tester/user test responsibilities

DEFINING ACCEPTANCE CRITERIA
Defining acceptance test strategy up-front
Source and role of acceptance criteria
5 elements criteria should address
Functionality the user must demonstrate
How much, how often user must test
Determining system quality
Who should carry out acceptance tests
How acceptance tests should be performed
Added benefit: revealing requirements errors

DESIGNING ACCEPTANCE TEST PLANS
Expanding the acceptance criteria
Allocating criteria to system design
Refining the design to catch oversights
Checklist of common problems to test
Equivalence classes and boundary values (see the sketch at the end of this section)
Making quality factors (attributes) testable
Structural testing applicable to users
GUI features that always need to be tested
Defining requirements-based tests
Constructing use cases
Cautions about use case pitfalls
One- and two-column use case formats
Turning use cases into tests
Consolidating tests into efficient test scripts
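
The equivalence class and boundary value items above can be made concrete with a small worked example. This Python sketch assumes an invented age field with an 18-65 valid range; the names and range are illustrative only:

    # Hypothetical validator: the field and its 18-65 valid range are
    # invented for this illustration.
    def is_valid_age(age: int) -> bool:
        """Accept ages in the inclusive range 18-65."""
        return 18 <= age <= 65

    # Equivalence classes: below range, in range, above range.
    # Boundary values probe just outside and just inside each edge.
    cases = [
        (17, False),  # boundary: just below the valid class
        (18, True),   # boundary: lowest valid value
        (40, True),   # representative of the valid class
        (65, True),   # boundary: highest valid value
        (66, False),  # boundary: just above the valid class
    ]

    for age, expected in cases:
        actual = is_valid_age(age)
        assert actual == expected, f"age={age}: expected {expected}, got {actual}"
    print("All equivalence-class and boundary-value cases passed.")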

CARRYING OUT ACCEPTANCE TESTS
Differentiating test cases and test data (see the sketch at the end of this outline)
Traps that destroy value of acceptance tests
Warning about conversions
Documentation, training, Help tests
Configuration, installation, localization
Security, backup, recovery tests
Suitability of automating acceptance testing
Performance, stress, load testing
Issues in creating test conditions and data
Capturing results, determining correctness
User’s defect tracking and metrics
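
To make the first item above concrete: a test case is the condition being checked, while test data are the particular values it runs against, with results captured so the user can judge correctness and keep simple defect metrics. A minimal Python sketch, assuming an invented shipping-fee rule (nothing here comes from the course materials):

    # Hypothetical function under test: a flat $5 shipping fee, waived
    # for orders of $50 or more (rule invented for this illustration).
    def shipping_fee(order_total: float) -> float:
        return 0.00 if order_total >= 50.00 else 5.00

    # One test CASE (compare actual fee to expected fee), run against a
    # table of test DATA rows; results are captured as pass/fail counts.
    test_data = [
        (49.99, 5.00),   # just below the waiver threshold
        (50.00, 0.00),   # boundary: fee waived exactly at $50
        (75.00, 0.00),   # comfortably above the threshold
    ]

    passed = failed = 0
    for order_total, expected_fee in test_data:
        actual_fee = shipping_fee(order_total)
        if actual_fee == expected_fee:
            passed += 1
        else:
            failed += 1
            print(f"DEFECT: total={order_total}: expected {expected_fee}, got {actual_fee}")
    print(f"{passed} passed, {failed} failed")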