Wednesday, March 26, 2008


1 : With thorough testing it is possible to remove all defects from a program prior to delivery to the customer

a. True

b. False

ANSWER : b

2 : Which of the following are characteristics of testable software?

a. observability

b. simplicity

c. stability

d. all of the above

ANSWER : d

3 : The testing technique that requires devising test cases to demonstrate that each program function is operational is called

a. black-box testing

b. glass-box testing

c. grey-box testing

d. white-box testing

ANSWER : a

4 : The testing technique that requires devising test cases to exercise the internal logic of a software module is called

a. behavioral testing

b. black-box testing

c. grey-box testing

d. white-box testing

ANSWER : d

5 : What types of errors are missed by black-box testing and can be uncovered by white-box testing?

a. behavioral errors

b. logic errors

c. performance errors

d. typographical errors

e. both b and d

ANSWER : e

6 : Program flow graphs are identical to program flowcharts.

a. True

b. False

ANSWER : b

7 : The cyclomatic complexity metric provides the designer with information regarding the number of

a. cycles in the program

b. errors in the program

c. independent logic paths in the program

d. statements in the program

ANSWER : c

8 : The cyclomatic complexity of a program can be computed directly from a PDL representation of an algorithm without drawing a program flow graph.

a. True

b. False

ANSWER : a
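
The answer above holds because, for structured code, cyclomatic complexity equals the number of binary decision statements plus one, so it can be counted straight off a PDL listing. A minimal Python sketch of that counting rule (the decision-keyword list is an illustrative assumption, not part of the original quiz):

    # Hedged sketch: estimate V(G) from a PDL listing by counting
    # binary decision keywords; the keyword list is illustrative only.
    DECISION_KEYWORDS = ("if", "elif", "while", "for", "until", "case")

    def cyclomatic_complexity(pdl_text):
        decisions = 0
        for line in pdl_text.lower().splitlines():
            if line.strip().startswith(DECISION_KEYWORDS):
                decisions += 1
        return decisions + 1  # V(G) = binary decisions + 1

    pdl = """read record
    if record is valid
        while more fields remain
            process field
        endwhile
    endif"""
    print(cyclomatic_complexity(pdl))  # prints 3 (two decisions + 1)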

9 : Condition testing is a control structure testing technique where the criterion used to design test cases is that they

a. rely on basis path testing

b. exercise the logical conditions in a program module

c. select test paths based on the locations and uses of variables

d. focus on testing the validity of loop constructs

ANSWER : b

10 : Data flow testing is a control structure testing technique where the criterion used to design test cases is that they

a. rely on basis path testing

b. exercise the logical conditions in a program module

c. select test paths based on the locations and uses of variables

d. focus on testing the validity of loop constructs

ANSWER : c

11 : Loop testing is a control structure testing technique where the criterion used to design test cases is that they

a. rely on basis path testing

b. exercise the logical conditions in a program module

c. select test paths based on the locations and uses of variables

d. focus on testing the validity of loop constructs

ANSWER : d

12 : Black-box testing attempts to find errors in which of the following categories

a. incorrect or missing functions

b. interface errors

c. performance errors

d. all of the above

e. none of the above

ANSWER : d

13 : Graph-based testing methods can only be used for object-oriented systems

a. True

b. False

ANSWER : b

14 : Equivalence testing divides the input domain into classes of data from which test cases can be derived to reduce the total number of test cases that must be developed.

a. True

b. False

ANSWER : a

15 : Boundary value analysis can only be used to do white-box testing.

a. True

b. False

ANSWER : b

16 : Comparison testing is typically done to test two competing products as part of customer market analysis prior to product release.

a. True

b. False

ANSWER : b

17 : Orthogonal array testing enables the test designer to maximize the coverage of the test cases devised for relatively small input domains.

a. True

b. False

ANSWER : a

18 : Test case design "in the small" for OO software is driven by the algorithmic detail of the individual operations.

a. True

b. False

ANSWER : a

19 : Encapsulation of attributes and operations inside objects makes it easy to obtain object state information during testing.

a. True

b. False

ANSWER : b

20 : Use-cases can provide useful input into the design of black-box and state-based tests of OO software.

a. True

b. False

ANSWER : a

21 : Fault-based testing is best reserved for

a. conventional software testing

b. operations and classes that are critical or suspect

c. use-case validation

d. white-box testing of operator algorithms

ANSWER : b

22 : Testing OO class operations is made more difficult by

a. encapsulation

b. inheritance

c. polymorphism

d. both b and c

ANSWER : d

23 : Scenario-based testing

a. concentrates on actor and software interaction

b. misses errors in specifications

c. misses errors in subsystem interactions

d. both a and b

ANSWER : a

24 : Deep structure testing is not designed to

a. examine object behaviors

b. exercise communication mechanisms

c. exercise object dependencies

d. exercise structure observable by the user

ANSWER : d

25 : Random order tests are conducted to exercise different class instance life histories.

a. True

b. False

ANSWER : a

26 : Which of these techniques is not useful for partition testing at the class level

a. attribute-based partitioning

b. category-based partitioning

c. equivalence class partitioning

d. state-based partitioning

ANSWER : c

27 : Multiple class testing is too complex to be tested using random test cases.

a. True

b. False

ANSWER : b

28 : Tests derived from behavioral class models should be based on the

a. data flow diagram

b. object-relation diagram

c. state diagram

d. use-case diagram

ANSWER : c

29 : Client/server architectures cannot be properly tested because network load is highly variable.

a. True

b. False

ANSWER : b

30 : Real-time applications add a new and potentially difficult element to the testing mix

a. performance

b. reliability

c. security

d. time

ANSWER : d

31. What is the meaning of COSO?

a. Common Sponsoring Organizations

b. Committee Of Sponsoring Organizations

c. Committee Of Standard Organizations

d. Common Standard Organization

e. None of the above

ANSWER : b

32. Which one is not a key term used in internal control and security?

a. Threat

b. Risk Control

c. Vulnerability

d. Exposure

e. None

ANSWER : c

33. Management is not responsible for an organization's internal control system.

a. True

b. False

ANSWER : b

34. Who is ultimately responsible for the internal control system?

a. CEO

b. Project Manager

c. Technical Manager

d. Developer

e. Tester

ANSWER : a

35. Who will provide important oversight of the internal control system?

a. Board of Directors

b. Audit Committee

c. Accounting Officers

d. Financial Officers

e. both a & b

f. both c & d

ANSWER : e

36. The sole purpose of risk control is to avoid risk.

a. True

b. False

ANSWER : b

37. Management controls involve limiting access to computer resources.

a. True

b. False

ANSWER : a

38. Software developed by contractors who are not part of the organization is referred to as insourcing.

a. True

b. False

ANSWER : b

39. Which one is not a tester's responsibility?

a. Assure the process for contracting software is adequate

b. Review the adequacy of the contractor's test plan

c. Perform acceptance testing on the software

d. Assure the ongoing operation and maintenance of the contracted software

e. None of the above

ANSWER : a

40. The software tester may or may not be involved in the actual acceptance testing

a. True

b. False

ANSWER : a

41. In client systems, testing should focus on performance and compatibility.

a. True

b. False

ANSWER : b

42. A database access application typically consists of the following elements, except

a. User Interface code

b. Business logic code

c. Data-access service code

d. Data Driven code

ANSWER : d

43. Wireless technologies represent a rapidly emerging area of growth and importance for providing ever-present access to the internet and email.

a. True

b. False

ANSWER : a

44. Acceptance testing involves procedures for identifying acceptance criteria for interim life cycle products and for accepting them.

a. True

b. False

ANSWER : a

45. Acceptance testing is designed to determine whether or not the software is "fit" for the user to use. The concept of "fit" is important in both design and testing. There are four components of "fit".

a. True

b. False

ANSWER : a

46. Acceptance testing occurs only at the end point of the development process; it should be an ongoing activity that tests both interim and final products.

a. True

b. False

ANSWER : b

47. Acceptance requirements that a system must meet can be divided into ________ categories.

a. Two

b. Three

c. Four

d. Five

ANSWER : c

48. _______ categories of testing techniques can be used in acceptance testing.

a. Two

b. Three

c. Four

d. Five

ANSWER : a

49. The _____________ defines the objectives of the acceptance activities and a plan for meeting them.

a. Project Manager

b. IT Manager

c. Acceptance Manager

d. ICO

ANSWER : c

50. Software Acceptance testing is the last opportunity for the user to examine the software for functional, interface, performance, and quality features prior to the final acceptance review.

a. True

b. False

ANSWER : a

Wednesday, March 19, 2008

Software Testing Dictionary

Acceptance Testing
Testing conducted to enable a user/customer to determine whether to accept a software product. Normally performed to validate the software meets a set of agreed acceptance criteria.

Accessibility Testing
Verifying that a product is accessible to people with disabilities (visual, hearing, cognitive, etc.).

Ad Hoc Testing
A testing phase where the tester tries to 'break' the system by randomly trying the system's functionality. Can include negative testing as well. See also Monkey Testing.

Agile Testing
Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm. See also Test Driven Development.

Automated Software Quality (ASQ)
The use of software tools, such as automated testing tools, to improve software quality.

Basis Path Testing
A white box test case design technique that uses the algorithmic flow of the program to design tests.

Basis Set
The set of tests derived using basis path testing.

Beta Testing
Testing of a pre-release version of a software product conducted by customers.

Binary Portability Testing
Testing an executable application for portability across system platforms and environments, usually for conformance to an ABI (Application Binary Interface) specification.

Black Box Testing
Testing based on an analysis of the specification of a piece of software without reference to its internal workings. The goal is to test how well the component conforms to the published requirements for the component.

Bottom Up Testing
An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.

Boundary Testing
Tests which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests.)

Bug
A fault in a program which causes the program to perform in an unintended or unanticipated manner.

Boundary Value Analysis
BVA is similar to Equivalence Partitioning but focuses on "corner cases" or values that are usually just inside or outside the range defined by the specification. This means that if a function expects all values in the range of negative 100 to positive 1000, test inputs would include negative 101 and positive 1001.
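
A minimal sketch of that example in code (the -100..1000 range comes from the definition above; the accepts function is a hypothetical stand-in for the code under test):

    # Boundary value analysis for a field specified as -100..1000:
    # probe each boundary plus the values just inside and just outside it.
    LOW, HIGH = -100, 1000
    boundary_inputs = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]

    def accepts(value):  # hypothetical function under test
        return LOW <= value <= HIGH

    for v in boundary_inputs:
        expected = LOW <= v <= HIGH
        assert accepts(v) == expected, "boundary failure at %d" % v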

Branch Testing
Testing in which all branches in the program source code are tested at least once.

Breadth Testing
A test suite that exercises the full functionality of a product but does not test features in detail.

CAST
Computer Aided Software Testing.

Capture/Replay Tool
A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time. Most commonly applied to GUI test tools.

CMM
The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging the maturity of the software processes of an organization and for identifying the key practices that are required to increase the maturity of these processes.

Cause Effect Graph
A graphical representation of inputs and the associated output effects, which can be used to design test cases.

Code Complete
Phase of development where functionality is implemented in entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.

Code Coverage
An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.

Code Inspection
A formal testing technique where the programmer reviews source code with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.

Code Walkthrough
A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.

Compatibility Testing
Testing whether software is compatible with other elements of a system with which it should operate, e.g. browsers, Operating Systems, or hardware.

Concurrency Testing
Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.

Conformance Testing
The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.

Context Driven Testing
The context-driven school of software testing is a flavor of Agile Testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.

Conversion Testing
Testing of programs or procedures used to convert data from existing systems for use in replacement systems.

Cyclomatic Complexity
A measure of the logical complexity of an algorithm, used in white-box testing.
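
For reference, the standard formula: for a flow graph with E edges, N nodes, and P connected components, V(G) = E - N + 2P; for a single module this equals the number of binary decisions plus one. A module whose flow graph has 9 edges and 8 nodes therefore has V(G) = 9 - 8 + 2 = 3, meaning basis path testing needs three independent paths.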

Data Driven Testing
Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in Automated Testing.
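
A minimal Python sketch of the idea, assuming a hypothetical cases.csv with "input" and "expected" columns and a trivial stand-in for the code under test:

    import csv

    def square(x):  # hypothetical function under test
        return x * x

    # The externally maintained data file drives the test action:
    # each row becomes one parameterized execution of the same test.
    with open("cases.csv", newline="") as f:
        for row in csv.DictReader(f):  # columns: input, expected
            assert square(int(row["input"])) == int(row["expected"]), row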

Defect
Nonconformance to requirements or functional / program specification

Dependency Testing
Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.

Depth Testing
A test that exercises a feature of a product in full detail.

Dynamic Testing
Testing software through executing it. See also Static Testing.

Endurance Testing
Checks for memory leaks or other problems that may occur with prolonged execution.

End-to-End testing
Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Equivalence Class
A portion of a component's input or output domains for which the component's behavior is assumed to be the same from the component's specification.

Equivalence Partitioning
A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.
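
For example (an illustrative sketch with a hypothetical age field specified as 18-65), one representative value stands in for each whole class:

    # Three equivalence classes for an 18-65 age field; one
    # representative value is tested from each class.
    def is_valid_age(age):  # hypothetical function under test
        return 18 <= age <= 65

    cases = [
        (10, False),  # class 1: below the valid range
        (40, True),   # class 2: inside the valid range
        (90, False),  # class 3: above the valid range
    ]
    for value, expected in cases:
        assert is_valid_age(value) == expected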

Exhaustive Testing
Testing which covers all combinations of input values and preconditions for an element of the software under test.

Functional Decomposition
A technique used during planning, analysis and design; creates a functional hierarchy for the software.

Functional Specification
A document that describes in detail the characteristics of the product with regard to its intended features.

Functional Testing
· Testing the features and operational behavior of a product to ensure they correspond to its specifications.
· Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.

Gorilla Testing
Testing one particular module or functionality heavily.

Gray Box Testing
A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.

High Order Tests
Black-box tests conducted once the software has been integrated.

Independent Test Group (ITG)
A group of people whose primary responsibility is software testing.

Inspection
A group review quality improvement process for written material. It consists of two aspects: product (document itself) improvement and process improvement (of both document production and inspection).

Integration Testing
Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.

Installation Testing
Confirms that the application under test installs correctly on the target hardware and software configurations, and that the installation process handles adverse conditions such as insufficient disk space or an interrupted install.

Localization Testing
Testing that verifies a version of a product adapted for a specific locality (language, regional formats, and conventions) works correctly in that locale.

Loop Testing
A white box testing technique that exercises program loops.
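
A common heuristic for simple loops (a sketch, assuming a hypothetical loop capped at MAX_ITER passes) is to exercise the loop zero times, once, twice, a typical count, and around its maximum:

    # Iteration counts that probe a simple loop's boundaries.
    MAX_ITER = 100
    counts = [0, 1, 2, MAX_ITER // 2, MAX_ITER - 1, MAX_ITER, MAX_ITER + 1]

    def run_loop(n):  # hypothetical loop under test, capped at MAX_ITER
        total = 0
        for _ in range(min(n, MAX_ITER)):
            total += 1
        return total

    for n in counts:
        assert run_loop(n) == min(n, MAX_ITER)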

Monkey Testing
Testing a system or an application on the fly, i.e. a few random tests here and there to ensure the system or application does not crash.

Negative Testing
Testing aimed at showing software does not work. Also known as "test to fail".

Path Testing
Testing in which all paths in the program source code are tested at least once.

Performance Testing
Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users.

Positive Testing
Testing aimed at showing software works. Also known as "test to pass".

Quality Assurance
All those planned or systematic actions necessary to provide adequate confidence that a product or service is of the type and quality needed and expected by the customer.

Quality Audit
A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives.

Quality Circle
A group of individuals with related interests that meet at regular intervals to consider problems or other matters related to the quality of outputs of a process and to the correction of problems or to the improvement of quality.

Quality Control
The operational techniques and the activities used to fulfill and verify requirements of quality.

Ramp Testing
Continuously raising an input signal until the system breaks down.

Recovery Testing
Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions.

Regression Testing
Retesting a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.

Release Candidate
A pre-release version, which contains the desired functionality of the final version, but which needs to be tested for bugs (which ideally should be removed before the final version is released).

Sanity Testing
Brief test of major functional elements of a piece of software to determine if it's basically operational.

Scalability Testing
Performance testing focused on ensuring the application under test gracefully handles increases in work load.

Security Testing
Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.

Smoke Testing
A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.

Soak Testing
Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.

Software Testing
A set of activities conducted with the intent of finding errors in software.

Static Analysis
Analysis of a program carried out without executing the program.

Static Analyzer
A tool that carries out static analysis.

Static Testing
Analysis of a program carried out without executing the program.

Storage Testing
Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage.

Stress Testing
Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load.

Structural Testing
Testing based on an analysis of internal workings and structure of a piece of software.

System Testing
Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.

Testability
The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.

Testing
· The process of exercising software to verify that it satisfies specified requirements and to detect errors.
· The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs), and to evaluate the features of the software item (Ref. IEEE Std 829).
· The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.

Test Bed
An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.

Test Case
· Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A Test Case will consist of information such as the requirement being tested, test steps, verification steps, prerequisites, outputs, test environment, etc.
· A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.

Test Driven Development
Testing methodology associated with Agile Programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a lot of tests, often roughly as many lines of test code as production code.
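
A minimal test-first sketch (all names hypothetical): the unit test below is written before the production code and fails until add is implemented:

    import unittest

    def add(a, b):  # production code, written only after the test existed
        return a + b

    class TestAdd(unittest.TestCase):
        def test_adds_two_numbers(self):  # written first: red, then green
            self.assertEqual(add(2, 3), 5)

    if __name__ == "__main__":
        unittest.main()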

Test Driver
A program or test tool used to execute tests. Also known as a Test Harness.

Test Environment
The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test including stubs and test drivers.

Test First Design
Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test.

Test Harness
A program or test tool used to execute tests.

Test Plan
A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.

Test Procedure
A document providing detailed instructions for the execution of one or more test cases.

Test Script
Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool.

Test Specification
A document specifying the test approach for a software feature or combination of features and the inputs, predicted results and execution conditions for the associated tests.

Test Suite
A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization. There may be several Test Suites for a particular product for example. In most cases however a Test Suite is a high level concept, grouping together hundreds or thousands of tests related by what they are intended to test.

Test Tools
Computer programs used in the testing of a system, a component of the system, or its documentation.

Thread Testing
A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.

Top Down Testing
An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.

Total Quality Management
A company commitment to develop a process that achieves high-quality products and customer satisfaction.

Traceability Matrix
A document showing the relationship between Test Requirements and Test Cases.

Usability Testing
Testing the ease with which users can learn and use a product.

Use Case
The specification of tests that are conducted from the end-user perspective. Use cases tend to focus on operating software as an end-user would conduct their day-to-day activities.

Unit Testing
Testing of individual software components.

Validation
The process of evaluating software at the end of the software development process to ensure compliance with software requirements. The techniques for validation are testing, inspection and reviewing.

Verification
The process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection and reviewing.

Volume Testing
Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files), can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.

Walkthrough
A review of requirements, designs or code characterized by the author of the material under review guiding the progression of the review.

White Box Testing
Testing based on an analysis of internal workings and structure of a piece of software. Includes techniques such as Branch Testing and Path Testing. Also known as Structural Testing and Glass Box Testing.

Workflow Testing
Scripted end-to-end testing which duplicates specific workflows which are expected to be utilized by the end-user.


Test Paper IV CSTE September/December 2005

Objective Paper1:

Q1. Who defined the standards?
A) ISO
B) QAI
Ans. ISO

Q2. Juran is famous for
A) Quality Control
B) Management
Ans. Quality Control

Q3. Which one is not a statistical tool?
A) Cause & effect Graphing
B) Stratification
C) Run Chart
D) Regression Analysis
Ans. Cause & effect Graphing

Q4. Histogram refers to
A) Bar Chart
B) Run Chart
C) Pareto
Ans. Bar Chart

Q5. Who is part of the external IT team?
A) Non Developer
B) Customer/ User

Q6. Which one is not Structural Testing?
A) Regression
B) Parallel
c) Acceptance
d) Stress
Ans. Acceptance

Q7. Who is not part of Inspection?
A) Prj Manager
B) Author
c) Moderator
d) Reader
e) Inspector
Ans. Prj Manager

Q8. A tester's job is not to
A) Report defects
b) Track who entered the defect into the system
Ans. Track who entered the defect into the system

Q9. Which one is not a secondary role of a tester?
A) Raising Issues
b) Instilling Confidence
c) Improving the process
d) Insight
e) developer work
Ans. developer work

Q10. Acceptance testing is
A) White Box
b) Black box
c) White box & black Box
d) none of the above
Ans. Black box

Q11. Deming's 14 principles include
A) Mobility of management
b) New philosophy
c) Adopt leadership
d) Both b & c
e) Both a & c
Ans. Both b & c

Q12. A configuration management tool is used in which phase?
A) Unit testing
b) Integration
c) Acceptance
d) All the phases
Ans. All the phases

Q13. The maximum number of defects is created in which phase?
A) Requirements
b) Design
c) Implementation
d) Coding
Ans. Requirements

Q14. 50% of defects are found in which phase?
A) Requirements
b) Design
c) Implementation
d) Coding
Ans. Requirements

Q15. Defects are least costly to fix in which phase?
A) Requirements
b) Design
c) Implementation
d) Coding
Ans. Requirements

Some questions from Paper2 are below:

1) Software inspections categorize defects as Wrong, Missing and Extra.
A) TRUE
B) FALSE

2) The purpose of Risk Management in a project is to
A) Eliminate Risks
B) Minimize Risks
C) Avoid risks
D) Anticipate the risks

3) Testing of the system to demonstrate system compliance with user requirements is:
A) Black box testing
B) System testing
C) Independent testing
D) Acceptance Testing

4) Function point is a measure of
A) Effort
B) Complexity
C) Usability
D) None of the above

5) An activity that verifies compliance with policies and procedures and ensures that resources are conserved is
A) an inspection
B) an audit
C) a review
D) an assessment

6) Which is the application of process management and quality improvement concepts to software development and maintenance?
A) Malcolm Baldridge
B) ISO 9000
C) QAI
D) QS14000

7) Software testing accounts for what percent of software development costs?
A) 10-20
B) 30-60
C) 70-80

8) Software errors are least costly to correct at what stage of the development cycle?
A) Requirements
B) Construction
C) Acceptance test
D) Design

9) Which of the following test approaches is not a Functional test approach?
A) Control Technique
B) Stress Technique
C) Regression Technique
D) Cause/effect Graphing
E) Requirements

10) Effectiveness is doing things right and efficiency is doing the right things
A) True
B) False

11) Juran is famous for
A) Quality Control
B) Quality Assurance
C) Trend Analysis

12) Top-down & bottom-up are part of incremental testing
A) True
B) False

13) Achieving quality requires:
A) Understanding the customers expectations
B) Exceeding the customers expectations
C) Meeting all the definitions of quality
D) Focusing on the customer
E.) All the above

14) Which is NOT an exit criterion for unit testing?

Objective Paper3:

Q1. An application has been delivered to the customer, and the customer is now finding bugs. Assume each bug costs $125 to fix, 4 bugs are found each day, and there are 5 working days. What should be the cost to fix those bugs?
A) $300
b) $2,500
c) $15,000
d) $13,000
e) $26,000
Ans. $2,500
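
(Worked arithmetic for the stated answer: 4 bugs/day × 5 days = 20 bugs, and 20 bugs × $125/bug = $2,500.)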

Q2. A tester should know
A) Test planning
b) Automation tool
c) Defect tracking tool
d) programming language
Ans. Defect Tracking tool

Q3. Pareto Analysis
A) 80-20 rule
b) Trend analysis
Ans. 80-20 Rule

Q4. Testing effort in the SDLC accounts for
A) 10-20 %
b) 30-60%
c) 60-80%
d) 80-90%
Ans 30-60%

Q5. Critical listening is
A) Listening to and analyzing what the speaker said
b) Listening to what is required
c) Listening only to the summary
Ans. Listening to and analyzing what the speaker said

Q6. Who finds vulnerabilities in the system?
A) ICO
b) ISS
c) It Manager
Ans. ICO

Q7. Cost of quality (COQ) does not include
A) Production
b) Appraisal
c) Prevention
d) Failure
Ans. Production

Q8. Training comes under which category of cost of quality?
A) Appraisal
b) Preventive
c) Failure
d) none of the above
Ans. Preventive

Q9. Why is testing called a negative activity?
A) Easy
b) Delay in implementation
c) No training required

Q10. Incremental testing is Top down & bottom Up testing
A) True B) False

Q11. Effectiveness is doing right thing & efficiency is doing thing right
True / False
Ans. False

Q12. Which one is not part of the Deming cycle?
A) Plan
b) do
c) check
d) act
e) risk analysis
Ans. risk analysis

Q13. A defect fixed in which phase costs the least?
A) same phase
b) production
c) next phase
d) none
Ans. same phase

Q14. In white-box testing, test coverage is
a) Decision
b) Statement
c) branch
d) modified decision
e) user specified data coverage
Ans. Decision

Q15. Test Plan should not contain
A) Scope
b) Objective
c) Policy
d) Risk Analysis
Ans. policy

Q16. This question was on test coverage.

Q17. Acceptance Testing is the responsibility of the:
A) Programmer
B) Project Leader
C) Independent Tester
D) Assistant programmer
E) User/Customer

Q18. Cost of Quality is least among
A) Prevention
B) Appraisal
C) Failure

Subjective Paper1: (Most of the subjective questions were from the back of the CSTE book.)

1. Why do we need to test the software?

2. Write the procedure for Test Plan, Test Script, Test Status Report.

3. You have to test a web application. What should be considered for it?

4. Write test cases for the Data model & data field validation.

5. There are two applications: one accessible from a single terminal and the other from multiple terminals. Write the points (test factors) that would not be considered for the single-terminal application.

6. Write the Test Strategy.

Subjective Paper 2:

1. Differentiate between Verification & validation. 15 marks

2. Define Test Efficiency & test Effectiveness. 15 marks

3. Your client would like to do an acceptance test on one of your projects but does not know how to prepare a test plan for it. Describe the components that should be included in the test plan. 30 marks

4. You are the tester in the organization, and the organization thinks that testers also introduce defects into the system. Write 5 such defects introduced by testers into the system. 15 marks

5. What is Over Testing? 15 marks

6. You have to do the unit testing of the system. Define what unit testing is and how it can be done, what its types are, and how you can ensure that your application is in compliance with the standards. 30 marks

7. Define Equivalence Partitioning, Boundary Value Analysis and Error Guessing with examples. 15 marks

8. Your company is doing a project on automation of aircraft and timing. You are asked to be involved in the testing. What should be the testing strategy for that? One airline web application has to be tested; write the test factors, i.e. what is to be tested? (Landing & take-off) 15 marks

9. Define
a) Boundary Analysis
b) Equivalence Partitioning
c) Error Guessing
Give one example for each.

Subjective Paper 1 - December 2005:

1. Describe four tests you would use to test COTS software.

2. A web application is to be installed, but it is suspected that it might fail in production. Describe four tests you would recommend to your manager to answer the question “How long would it take for the application to recover after it fails?”

3. You are the Test Manager of a company which has outsourced its testing activities. What would you state to your IT Manager, in terms of your responsibilities as Test Manager, to give him assurance that testing will be conducted properly?

4. Describe any four processes you would establish in a test environment and why?

Subjective Paper 2 - December 2005:

1. What would you include in a defect report.

2. What is the “V” in the V-testing concept? List the stages.

3. Draw a Unit testing workbench, describe each activity in the workbench.

4. You are to introduce automated testing tools. List any four automated test tools with vendor names and explain why you chose them.

5. Describe Risk, risk analysis, threat, vulnerability, internal control.

6. Describe statement, branch, condition, expression, path coverage

7. Describe any 3 defect-related metrics.

8. For a web-based application, list 4 important quality factors you would test for. Also state why are they important.

Test Paper III CSTE April 22nd 2006

Objective Questions:

1. Which communication skill will be neglected by most
a. Reading
b. Listening
c. Writing

2. Therapeutic listening is
a. Sympathetic listening
b. Listening to pieces of information…

3. Which model demonstrates relation between 2 or more parameters of effort, duration or resource?
a. Cost
b. Constraint
c. Function Point

4. In which model expertise can be used to estimate cost
a. Top-Down
b. Expert Judgment
c. Bottom-Up

5. Two objective questions on responsibility like who is responsible in issuing IT policy, work policy etc.

6. Fit for use is
a. Transcendent
b. Product Based
c. User Based
d. Value Based

7. Re-Use of data is done in which type of testing (Similar type 2 questions on retesting and regression testing)
a. Capture/Play back
b. System Testing
c. Regression Testing
d. Integration Testing

8. one question each on configuration management / Change Management / Version Control.

9. In Acceptance testing, which data is used.
a. Test Cases
b. Use Case
c. Test Plan

10. In four components of FIT, reliability is included in (Similar type 2 qs)
a. Data
b. People
c. Structure
d. Rules

11. Obligations of both contractual parties should be spelled out in
a. What is done
b. Who does it
c. When it is done
d. How it is done

12. Dates on which Obligations need to be filled out should be specified in
a. What is done
b. Who does it
c. When it is done
d. How it is done

13. Two questions on Internal Auditor and Internal Control responsibilities

14. One question on ERM model

15. One question on Control Frame Work Model

16. Two questions on CMM 5 levels of maturity (Like in which level controls are implemented).

Subjective Questions:

1. You gave the software for independent testers. You are responsible for Unit, Integration, System and Acceptance testing. Explain about each testing methods and tell which testing can be given to Independent Testing and Which for development team.

2. Tell about any 3 tools and vendor of the tools.

3. One question on Optimum testing.

4. You developed a risk plan, test plan, and test scripts. You are doing testing. At this point you got a major requirements change. What changes are required to incorporate these changes into your plan?

5. Explain about
Complexity Measures
Data Flow Analysis
Symbolic Execution

6. Acceptance Test Plan for “Inventory Control Software” contents and explanation.

7. What are CSFs? What are the CSFs that you will look for in contractor-delivered software? Define them.

8. Security vulnerabilities for an e-commerce application.

9. E-commerce project is newly developed in your organization. You are not able to test all types of Operating Systems and Browsers. Prepare mitigation plan.

10. Explain V-Model.

11. Reliability and maintainability are quality factors given explicitly in the requirements. List a few other QFs for a web-based project and write the rationale for selecting them.

12. 5 important things you consider when writing a test plan, and why you think they are important.

13. The UI for a defect management tool.

14. Difference between Acceptance and System Testing.

15. Mention and write on a few techniques for the defect prevention - internal control testing.

16. 3 important issues for wireless technology.

17. One question on Pareto Charts - Like gave a scenario for no. of critical, minor and major bugs and no. of days to fix the bugs. Analyze the scenario.

18. What are the important quality factors that you will test for in a multiple-workstation scenario that you will not test in a single-workstation scenario?

19. The contents of a system test report.

20. List 5 test metrics and explain how you can use them.

21. Which steps in a testing process are defect-prone? Explain why.

22. What, according to you, are the important documents that you would refer to when testing a change made in a project that has been released (i.e., operational)?

23. What do you mean by defect expectation? How can you use it to improve the testing process?

24. Mention three techniques for unit testing. State an objective for unit testing. Based on those, how will you ensure that the unit testing has complied with the expectations?

25. Steps involved in testing for security.

26. What are the important things that you will look for during a demo of contractor software?

27. How do you use control charts for controlling the testing process? Explain control charts.

28. If the code will be delivered a week late but there is no change in the release date, how will you as test manager plan your testing? (A question that has appeared in a lot of previous question papers; the phrasing here is approximate.)

29. As a test manager: a question related to the test skill set.

Please note that I got these questions from some of my friends who appeared in the CSTE on April 22, 2006. These are not all the questions of the paper, only those which they remember. I'll try to update these as I come to know more...

Test Paper I CSTE Sept 16th 2006

Objective Questions:

1. A question on fit components. Which of the following contributes to fit?

a. Data b. Structure c. People d. Rule e. All of the above

2. Reliability, timeliness, consistency are included in which component of fit

a. Data b. Structure c. People d. Rule

3. Who will develop the test process for software development using new technology?

a. Management b. Project team c. Auditor d. Tester (and a few more options)

4. Which of the following are relatively complete acceptance criteria?

a. Performance should increase b. Response time should be within 10 sec (a few more statements)

5. One question was on "Experienced people can be used as a tool for estimating the cost- Budgeting"

6. The communication type which is rarely emphasized

a. Listening (a few more options...)

7. There were 2 questions on maturity levels, e.g. which level enforces control for technology.

8. There were 3 questions on standards, policy and procedure.

9. Which of the following models has these steps: event identification, risk assessment, risk response?

a. ERM b. COSO internal control framework c. CobiT model

10. Utilizing the computer resources to perform their work belongs to which type of activity specified below?

a. Interface b. Development c. Computer operation

11. Obligations of both contractual parties should be spelled out in detail in which part of the contract?

a. What b. Who c. When d. How

12. There was a question on "when the contracted software will be completed"

13. Test planning activity which includes starting and end time for each test milestone.

a. Budgeting b. Scheduling c. Estimation d. Staffing

Subjective Questions:

1. There are many software quality factors, like maintainability and reliability. Give the 5 most important software quality factors that you would consider for testing a web-based application.

2. Create a test plan for a simple project, not a complex project.

3. Write a standard template for the test plan, test script, and status report of a project.

4. Design a screen which will show the fields necessary for a defect description. Diagram a defect-reporting tool screen and label all the things you would need to mention to log a defect.

5. A question related to use cases, like guidelines for writing a use case for a customer.

6. Draw a control chart and explain how it can be used to see that the testing process is in control?

7. Difference between system test and acceptance test?

8. Aspects of computer software that should be observed during the demonstration of COTS software?

9. Define unit, integration, system, regression, and acceptance testing, and explain what you would recommend for the newly formed independent test team. Previously in the organization, testing was performed by the software development team.

10. Explain any 4 factors for preventive controls?

11. A question on security testing techniques: what tests would you include in the test plan for testing security, given minimal knowledge about security testing?

12. Risks involved in testing wireless technology, and how you will develop controls on it to gain confidence about the wireless technology used.

13. Give MEASUREMENTS for
a] Test Effectiveness and b] Test Efficiency

14. Your Company is about to roll out an E-Commerce application. It is not possible to test the application on all types of browsers on all platforms and operating systems. What steps would you take in the testing environment to reduce the business risks and commercial risks?

15. Developing compatibility and motivation with a test team helps assure effective testing. List at least four guidelines you would follow to develop compatibility and motivation within your test team?

16. In a development project, test planning, resource allocation, and test scripting are completed, and testing is being executed. At this stage, if there is a major change in requirements, what will be the tester's role (actions in response to the changes)? (This question has already been asked in previous years' papers.)

17. There is a delay of 5 days in the development project. How will the tester handle the testing activities without changing resources, working time, ...? (This question has already been asked in previous years' papers.)

18. Explain the total number of defects found vs. defects corrected, using a graph.

19. Explain and diagram a report which will be used for reporting corrected and uncorrected defects to the development team.