Chapter 1 - Introduction

What are some recent major computer system failures caused by software bugs?

• A September 2006 news report indicated problems with software used in a state government's primary election: voter check-in machines, which were separate from the electronic voting machines, periodically rebooted unexpectedly, causing confusion and delays at voting sites. The problem was reportedly due to insufficient testing.

• In August of 2006 a U.S. government student loan service erroneously made public the personal data of as many as 21,000 borrowers on its web site, due to a software error. The bug was fixed and the government department subsequently offered to arrange free credit-monitoring services for those affected.

• A software error reportedly resulted in overbilling of up to several thousand dollars to each of 11,000 customers of a major telecommunications company in June of 2006. It was reported that the software bug was fixed within days, but that correcting the billing errors would take much longer.

• News reports in May of 2006 described a multi-million dollar lawsuit settlement paid by a healthcare software vendor to one of its customers. It was reported that the customer claimed there were problems with the software they had contracted for, including poor integration of software modules, and problems that resulted in missing or incorrect data used by medical personnel.

• In early 2006 problems in a government's financial monitoring software resulted in incorrect election candidate financial reports being made available to the public. The government's election finance reporting web site had to be shut down until the software was repaired.

• Trading on a major Asian stock exchange was brought to a halt in November of 2005, reportedly due to an error in a system software upgrade. The problem was rectified and trading resumed later the same day.

• A May 2005 newspaper article reported that a major hybrid car manufacturer had to install a software fix on 20,000 vehicles due to problems with invalid engine warning lights and occasional stalling. In the article, an automotive software specialist indicated that the automobile industry spends $2 billion to $3 billion per year fixing software problems.

• Media reports in January of 2005 detailed severe problems with a $170 million high-profile U.S. government IT systems project. Software testing was one of the five major problem areas according to a report of the commission reviewing the project. In March of 2005 it was decided to scrap the entire project.

• In July 2004 newspapers reported that a new government welfare management system in Canada costing several hundred million dollars was unable to handle a simple benefits rate increase after being put into live operation. Reportedly the original contract allowed for only 6 weeks of acceptance testing and the system was never tested for its ability to handle a rate increase.

• Millions of bank accounts were impacted by errors due to installation of inadequately tested software code in the transaction processing system of a major North American bank, according to mid-2004 news reports. Articles about the incident stated that it took two weeks to fix all the resulting errors, that additional problems resulted when the incident drew a large number of e-mail phishing attacks against the bank's customers, and that the total cost of the incident could exceed $100 million.

• A bug in site management software utilized by companies with a significant percentage of worldwide web traffic was reported in May of 2004. The bug resulted in performance problems for many of the sites simultaneously and required disabling of the software until the bug was fixed.

• According to news reports in April of 2004, a software bug was determined to be a major contributor to the 2003 Northeast blackout, the worst power system failure in North American history. The failure involved loss of electrical power to 50 million customers, forced shutdown of 100 power plants, and economic losses estimated at $6 billion. The bug was reportedly in one utility company's vendor-supplied power monitoring and management system, which was unable to correctly handle and report on an unusual confluence of initially localized events. The error was found and corrected after examining millions of lines of code.

• In early 2004, news reports revealed the intentional use of a software bug as a counter-espionage tool. According to the report, in the early 1980s one nation surreptitiously allowed a hostile nation's espionage service to steal a version of sophisticated industrial software that had intentionally added flaws. This eventually resulted in major industrial disruption in the country that used the stolen flawed software.

• A major U.S. retailer was reportedly hit with a large government fine in October of 2003 due to web site errors that enabled customers to view one another's online orders.

• News stories in the fall of 2003 stated that a manufacturing company recalled all of its transportation products in order to fix a software problem causing instability in certain circumstances. The company found and reported the bug itself and initiated a recall in which a software upgrade fixed the problems.

• In August of 2003 a U.S. court ruled that a lawsuit against a large online brokerage company could proceed; the lawsuit reportedly involved claims that the company was not fixing system problems that sometimes resulted in failed stock trades, based on the experiences of 4 plaintiffs during an 8-month period. A lower court's earlier ruling that "...six miscues out of more than 400 trades does not indicate negligence" was invalidated.

• In April of 2003 it was announced that a large student loan company in the U.S. had made a software error in calculating the monthly payments on 800,000 loans. Although borrowers were to be notified of an increase in their required payments, the company would still reportedly lose $8 million in interest. The error was uncovered when borrowers began reporting inconsistencies in their bills.

• News reports in February of 2003 revealed that the U.S. Treasury Department mailed 50,000 Social Security checks without any beneficiary names. A spokesperson indicated that the missing names were due to an error in a software change. Replacement checks were subsequently mailed out with the problem corrected, and recipients were then able to cash their Social Security checks.

• In March of 2002 it was reported that software bugs in Britain's national tax system resulted in more than 100,000 erroneous tax overcharges. The problem was partly attributed to the difficulty of testing the integration of multiple systems.

• A newspaper columnist reported in July 2001 that a serious flaw was found in off-the-shelf software that had long been used in systems for tracking certain U.S. nuclear materials. The same software had been recently donated to another country to be used in tracking their own nuclear materials, and it was not until scientists in that country discovered the problem, and shared the information, that U.S. officials became aware of the problems.

• According to newspaper stories in mid-2001, a major systems development contractor was fired and sued over problems with a large retirement plan management system. According to the reports, the client claimed that system deliveries were late, the software had excessive defects, and it caused other systems to crash.

• In January of 2001 newspapers reported that a major European railroad was hit by the aftereffects of the Y2K bug. The company found that many of its newer trains would not run due to their inability to recognize the date '31/12/2000'; the trains were started by altering the control system's date settings.

• News reports in September of 2000 told of a software vendor settling a lawsuit with a large mortgage lender; the vendor had reportedly delivered an online mortgage processing system that did not meet specifications, was delivered late, and didn't work.

• In early 2000, major problems were reported with a new computer system in a large suburban U.S. public school district with 100,000+ students; problems included 10,000 erroneous report cards and students left stranded by failed class registration systems; the district's CIO was fired. The school district decided to reinstate its original 25-year-old system for at least a year until the bugs were worked out of the new system by the software vendors.

• A review board concluded that the NASA Mars Polar Lander failed in December 1999 due to software problems that caused improper functioning of retro rockets used by the Lander during its descent to the Martian surface.

• In October of 1999 the $125 million NASA Mars Climate Orbiter spacecraft was believed to be lost in space due to a simple data conversion error. It was determined that spacecraft software used certain data in English units that should have been in metric units. Among other tasks, the orbiter was to serve as a communications relay for the Mars Polar Lander mission, which failed for unknown reasons in December 1999. Several investigating panels were convened to determine the process failures that allowed the error to go undetected.

• Bugs in software supporting a large commercial high-speed data network affected 70,000 business customers over a period of 8 days in August of 1999. Among those affected was the electronic trading system of the largest U.S. futures exchange, which was shut down for most of a week as a result of the outages.

• In April of 1999 a software bug caused the failure of a $1.2 billion U.S. military satellite launch, the costliest unmanned accident in the history of Cape Canaveral launches. The failure was the latest in a string of launch failures, triggering a complete military and industry review of U.S. space launch programs, including software integration and testing processes. Congressional oversight hearings were requested.

• A small town in Illinois in the U.S. received an unusually large monthly electric bill of $7 million in March of 1999. This was about 700 times larger than its normal bill. It turned out to be due to bugs in new software that had been purchased by the local power company to deal with Y2K software issues.

• In early 1999 a major computer game company recalled all copies of a popular new product due to software problems. The company made a public apology for releasing a product before it was ready.

• The computer system of a major online U.S. stock trading service failed during trading hours several times over a period of days in February of 1999 according to nationwide news reports. The problem was reportedly due to bugs in a software upgrade intended to speed online trade confirmations.

• In April of 1998 a major U.S. data communications network failed for 24 hours, crippling a large part of some U.S. credit card transaction authorization systems as well as other large U.S. bank, retail, and government data systems. The cause was eventually traced to a software bug.

• January 1998 news reports told of software problems at a major U.S. telecommunications company that resulted in no charges for long distance calls for a month for 400,000 customers. The problem went undetected until customers called up with questions about their bills.

• In November of 1997 the stock of a major health industry company dropped 60% due to reports of failures in computer billing systems, problems with a large database conversion, and inadequate software testing. It was reported that more than $100,000,000 in receivables had to be written off and that multi-million dollar fines were levied on the company by government agencies.

• A retail store chain filed suit in August of 1997 against a transaction processing system vendor (not a credit card company) due to the software's inability to handle credit cards with year 2000 expiration dates.

• In August of 1997 one of the leading consumer credit reporting companies reportedly shut down their new public web site after less than two days of operation due to software problems. The new site allowed web site visitors instant access, for a small fee, to their personal credit reports. However, a number of initial users ended up viewing each other's reports instead of their own, resulting in irate customers and nationwide publicity. The problem was attributed to "...unexpectedly high demand from consumers and faulty software that routed the files to the wrong computers."

• In November of 1996, newspapers reported that software bugs caused the 411 telephone information system of one of the U.S. RBOCs (Regional Bell Operating Companies) to fail for most of a day. Most of the 2,000 operators had to search through phone books instead of using their 13,000,000-listing database. The bugs were introduced by new software modifications, and the problem software had been installed on both the production and backup systems. A spokesman for the software vendor reportedly stated that 'It had nothing to do with the integrity of the software. It was human error.'

• On June 4, 1996 the first flight of the European Space Agency's new Ariane 5 rocket failed shortly after launching, resulting in an estimated uninsured loss of a half billion dollars. It was reportedly due to the lack of exception handling for an overflow error in a conversion from a 64-bit floating-point number to a 16-bit signed integer.

• Software bugs caused the bank accounts of 823 customers of a major U.S. bank to be credited with $924,844,208.32 each in May of 1996, according to newspaper reports. The American Bankers Association claimed it was the largest such error in banking history. A bank spokesman said the programming errors were corrected and all funds were recovered.

• Software bugs in a Soviet early-warning monitoring system nearly brought on nuclear war in 1983, according to news reports in early 1999. The software was supposed to filter out false missile detections caused by Soviet satellites picking up sunlight reflections off cloud-tops, but failed to do so. Disaster was averted when a Soviet commander, based on what he said was a '...funny feeling in my gut', decided the apparent missile attack was a false alarm. The filtering software code was rewritten.

Does every software project need testers?

While all projects will benefit from testing, some projects may not require independent test staff to succeed.

Which projects may not need independent test staff? The answer depends on the size and context of the project, the risks, the development methodology, the skill and experience of the developers, and other factors. For instance, if the project is a short-term, small, low risk project, with highly experienced programmers utilizing thorough unit testing or test-first development, then test engineers may not be required for the project to succeed.

In some cases an IT organization may be too small or new to have a testing staff even if the situation calls for it. In these circumstances it may be appropriate to instead use contractors or outsourcing, or to adjust the project management and development approach (by switching to more senior developers and agile test-first development, for example). Inexperienced managers sometimes gamble on the success of a project by skipping thorough testing or by having programmers do post-development functional testing of their own work, a decidedly high-risk gamble.

For non-trivial-size projects or projects with non-trivial risks, a testing staff is usually necessary. As in any business, the use of personnel with specialized skills enhances an organization's ability to be successful in large, complex, or difficult tasks. It allows for both a) deeper and stronger skills and b) the contribution of differing perspectives. For example, programmers typically have the perspective of 'what are the technical issues in making this functionality work?'. A test engineer typically has the perspective of 'what might go wrong with this functionality, and how can we ensure it meets expectations?'. Technical people who can be highly effective in approaching tasks from both of those perspectives are rare, which is why, sooner or later, organizations bring in test specialists.

Why is it often hard for management to get serious about quality assurance?

Solving problems is a high-visibility process; preventing problems is low-visibility. This is illustrated by an old parable: In ancient China there was a family of healers, one of whom was known throughout the land and employed as a physician to a great lord. The physician was asked which of his family was the most skillful healer. He replied, 'I tend to the sick and dying with drastic and dramatic treatments, and on occasion someone is cured and my name gets out among the lords. My elder brother cures sickness when it just begins to take root, and his skills are known among the local peasants and neighbors. My eldest brother is able to sense the spirit of sickness and eradicate it before it takes form; his name is unknown outside our home.' Bug prevention, like the eldest brother's skill, goes unnoticed precisely because the problems it averts never become visible.

Why does software have bugs?

• Miscommunication or no communication - as to specifics of what an application should or shouldn't do (the application's requirements).

• Software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Multi-tiered applications, client-server and distributed applications, data communications, enormous relational databases, and sheer size of applications have all contributed to the exponential growth in software/system complexity.

• Programming errors - programmers, like anyone else, can make mistakes.

• Changing requirements (whether documented or undocumented) - the end-user may not understand the effects of changes, or may understand and request them anyway: redesign, rescheduling of engineers, effects on other projects, work already completed that may have to be redone or thrown out, affected hardware requirements, etc. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of coordinating changes may result in errors. Enthusiasm of the engineering staff may suffer. In some fast-changing business environments, continuously modified requirements may be a fact of life. In this case, management must understand the resulting risks, and QA and test engineers must adapt and plan for continuous extensive testing to keep the inevitable bugs from running out of control.

• Time pressures - scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.

• Egos - people prefer to say things like:

• 'No problem'

• 'Piece of cake'

• 'I can whip that out in a few hours'

• 'It should be easy to update that old code'

Instead of:

• 'That adds a lot of complexity and we could end up making a lot of mistakes'

• 'We have no idea if we can do that; we'll wing it'

• 'I can't estimate how long it will take until I take a close look at it'

• 'We can't figure out what that old spaghetti code did in the first place'

If there are too many unrealistic 'no problem's', the result is bugs.

• Poorly documented code - it's tough to maintain and modify code that is badly written or poorly documented; the result is bugs. In many organizations management provides no incentive for programmers to document their code or write clear, understandable, maintainable code. In fact, it's usually the opposite: they get points mostly for quickly turning out code, and there's job security if nobody else can understand it ('if it was hard to write, it should be hard to read').

• Software development tools - visual tools, class libraries, compilers, scripting tools, etc. often introduce their own bugs or are poorly documented, resulting in added bugs.

How can new Software QA processes be introduced in an existing organization?

• A lot depends on the size of the organization and the risks involved. For large organizations with high-risk (in terms of lives or property) projects, serious management buy-in is required and a formalized QA process is necessary.

• Where the risk is lower, management and organizational buy-in and QA implementation may be a slower, step-at-a-time process. QA processes should be balanced with productivity so as to keep bureaucracy from getting out of hand.

• For small groups or projects, a more ad-hoc process may be appropriate, depending on the type of customers and projects. A lot will depend on team leads or managers, feedback to developers, and ensuring adequate communications among customers, managers, developers, and testers.

• The most value for effort will often be in (a) requirements management processes, with a goal of clear, complete, testable requirement specifications embodied in requirements or design documentation (or, in 'agile'-type environments, extensive continuous coordination with end-users); (b) design inspections and code inspections; and (c) post-mortems/retrospectives.

• Other possibilities include incremental self-managed team approaches such as 'Kaizen' methods of continuous process improvement, the Deming-Shewhart Plan-Do-Check-Act cycle, and others.

What is verification? Validation?

Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. This can be done with checklists, issues lists, walkthroughs, and inspection meetings.

Validation typically involves actual testing and takes place after verification is completed. The term 'IV & V' refers to Independent Verification and Validation.

What is a 'walkthrough'?

A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or no preparation is usually required.

What's an 'inspection'?

An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a moderator, a reader, and a recorder to take notes. The subject of the inspection is typically a document such as a requirements spec or a test plan, and the purpose is to find problems and see what's missing, not to fix anything. Attendees should prepare for this type of meeting by reading through the document; most problems will be found during this preparation. The result of the inspection meeting should be a written report. Thorough preparation for inspections is difficult, painstaking work, but it is one of the most cost-effective methods of ensuring quality. Employees who are most skilled at inspections are like the 'eldest brother' in the parable in 'Why is it often hard for management to get serious about quality assurance?'. Their skill may have low visibility, but they are extremely valuable to any software development organization, since bug prevention is far more cost-effective than bug detection.

What kinds of testing should be considered?

• Black box testing - not based on any knowledge of internal design or code. Tests are based on requirements and functionality.

• White box testing - based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, conditions.

• Unit testing - the most 'micro' scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses. (A minimal example is sketched after this list.)

• Incremental integration testing - continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.

• Integration testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

• Functional testing - black box type testing geared to the functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing).

• System testing - black box type testing that is based on overall requirements specifications; covers all combined parts of a system.

• End-to-end testing - similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

• Sanity testing or smoke testing - typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or corrupting databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.

• Regression testing - re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.

• Acceptance testing - final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.

• Load testing - testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.

• Stress testing - term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.

• Performance testing - term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.

• Usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.

• Install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.

• Recovery testing - testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

• Failover testing - typically used interchangeably with 'recovery testing'.

• Security testing - testing how well the system protects against unauthorized internal or external access, willful damage, etc; may require sophisticated testing techniques.

• Compatibility testing - testing how well software performs in a particular hardware/software/operating system/network/etc. environment.

• Exploratory testing - often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.

• Ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.

• Context-driven testing - testing driven by an understanding of the environment, culture, and intended use of software. For example, the testing approach for life-critical medical equipment software would be completely different than that for a low-cost computer game.

• User acceptance testing - determining if software is satisfactory to an end-user or customer.

• Comparison testing - comparing software weaknesses and strengths to competing products.

• Alpha testing - testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.

• Beta testing - testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.

• Mutation testing - a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources. (The idea is also sketched after this list.)
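To make the unit-testing entry above concrete, here is a minimal sketch. The function and values are hypothetical, and plain asserts stand in for a unit-test framework such as CppUnit or Google Test:

```cpp
#include <cassert>
#include <cstdio>

// Hypothetical unit under test: clamps a value into the range [lo, hi].
int clamp(int value, int lo, int hi) {
    if (value < lo) return lo;
    if (value > hi) return hi;
    return value;
}

int main() {
    // One assertion per behavior, including the boundary cases,
    // which is where bugs most often hide.
    assert(clamp(5, 0, 10) == 5);    // value already in range
    assert(clamp(-3, 0, 10) == 0);   // below the lower bound
    assert(clamp(42, 0, 10) == 10);  // above the upper bound
    assert(clamp(0, 0, 10) == 0);    // exactly on a boundary
    std::puts("all unit tests passed");
    return 0;
}
```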
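And a compact sketch of the mutation-testing idea. In practice a mutation tool generates many mutants automatically and re-runs the entire suite against each one; here a single hand-written 'mutant' (all names are illustrative) shows why a boundary test is needed to 'kill' it:

```cpp
#include <cstdio>

// Original function, and a hand-made "mutant" with one deliberate change.
bool is_adult(int age)        { return age >= 18; }
bool is_adult_mutant(int age) { return age > 18; }  // mutation: >= became >

// Run the same test set against a given implementation. The test set is
// adequate only if at least one test "kills" (detects) each mutant.
bool tests_pass(bool (*candidate)(int)) {
    if (candidate(30) != true)  return false;  // passes for both versions
    if (candidate(10) != false) return false;  // passes for both versions
    if (candidate(18) != true)  return false;  // boundary test: kills the mutant
    return true;
}

int main() {
    std::printf("original passes tests: %s\n", tests_pass(is_adult) ? "yes" : "no");
    std::printf("mutant killed:         %s\n", tests_pass(is_adult_mutant) ? "no" : "yes");
    return 0;
}
```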

What are 5 common problems in the software development process?

• Poor requirements - if requirements are unclear, incomplete, too general, or not testable, there will be problems.

• Unrealistic schedule - if too much work is crammed in too little time, problems are inevitable.

• Inadequate testing - no one will know whether or not the program is any good until the customer complains or systems crash.

• Featuritis - requests to pile on new features after development is underway; extremely common.

• Miscommunication - if developers don't know what's needed or customers have erroneous expectations, problems are guaranteed.

What are 5 common solutions to software development problems?

• Solid requirements - clear, complete, detailed, cohesive, attainable, testable requirements that are agreed to by all players. Use prototypes to help nail down requirements. In 'agile'-type environments, continuous close coordination with customers/end-users is necessary.

• Realistic schedules - allow adequate time for planning, design, testing, bug fixing, re-testing, changes, and documentation; personnel should be able to complete the project without burning out.

• Adequate testing - start testing early on, re-test after fixes or changes, plan for adequate time for testing and bug fixing. 'Early' testing ideally includes unit testing by developers and built-in testing and diagnostic capabilities.

• Stick to initial requirements as much as possible - be prepared to defend against excessive changes and additions once development has begun, and be prepared to explain consequences. If changes are necessary, they should be adequately reflected in related schedule changes. If possible, work closely with customers/end-users to manage expectations. This will provide them a higher comfort level with their requirements decisions and minimize excessive changes later on.

• Communication - require walkthroughs and inspections when appropriate; make extensive use of group communication tools - groupware, wikis, bug-tracking and change management tools, intranet capabilities, etc.; ensure that information/documentation is available and up-to-date - preferably electronic, not paper; promote teamwork and cooperation; use prototypes and/or continuous communication with end-users if possible to clarify expectations.

What is software 'quality'?

Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable. However, quality is obviously a subjective term. It will depend on who the 'customer' is and their overall influence in the scheme of things. A wide-angle view of the 'customers' of a software development project might include end-users, customer acceptance testers, customer contract officers, customer management, the development organization's management/accountants/testers/salespeople, future software maintenance engineers, stockholders, magazine columnists, etc. Each type of 'customer' will have its own slant on 'quality' - the accounting department might define quality in terms of profits, while an end-user might define quality as user-friendly and bug-free.

What is 'good code'?

'Good code' is code that works, is bug-free, and is readable and maintainable. Some organizations have coding 'standards' that all developers are supposed to adhere to, but everyone has different ideas about what's best, or what is too many or too few rules. There are also various theories and metrics, such as McCabe complexity metrics. It should be kept in mind that excessive use of standards and rules can stifle productivity and creativity. 'Peer reviews', 'buddy checks', code analysis tools, etc. can be used to check for problems and enforce standards.

For C and C++ coding, here are some typical ideas to consider in setting rules/standards; these may or may not apply to a particular situation (a brief example follows the list):

• Minimize or eliminate use of global variables.

• Use descriptive function and method names - use both upper and lower case, avoid abbreviations, use as many characters as necessary to be adequately descriptive (use of more than 20 characters is not out of line); be consistent in naming conventions.

• Use descriptive variable names - use both upper and lower case, avoid abbreviations, use as many characters as necessary to be adequately descriptive (use of more than 20 characters is not out of line); be consistent in naming conventions.

• Function and method sizes should be minimized; less than 100 lines of code is good, less than 50 lines is preferable.

• Function descriptions should be clearly spelled out in comments preceding a function's code.

• Organize code for readability.

• Use whitespace generously - vertically and horizontally.

• Each line of code should contain 70 characters max.

• One code statement per line.

• Coding style should be consistent throughout a program (e.g., use of brackets, indentations, naming conventions, etc.).

• In adding comments, err on the side of too many rather than too few; a common rule of thumb is that there should be at least as many lines of comments (including header blocks) as lines of code.

• No matter how small, an application should include documentation of the overall program function and flow (even a few paragraphs is better than nothing); or, if possible, a separate flow chart and detailed program documentation.

• Make extensive use of error handling procedures and status and error logging.

• For C++, to minimize complexity and increase maintainability, avoid too many levels of inheritance in class hierarchies (relative to the size and complexity of the application). Minimize use of multiple inheritance, and minimize use of operator overloading (note that the Java programming language eliminates multiple inheritance and operator overloading).

• For C++, keep class methods small; less than 50 lines of code per method is preferable.

• For C++, make liberal use of exception handlers.
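As a rough illustration, the short fragment below follows several of these rules at once: descriptive names, a comment block preceding the function, small function size, and explicit error handling. It is a sketch of one possible style, not a prescribed standard; all names are invented:

```cpp
#include <cstdio>
#include <stdexcept>
#include <vector>

// ComputeAverageScore
// Returns the arithmetic mean of the given scores. Throws
// std::invalid_argument if the list is empty, so that callers
// cannot silently divide by zero.
double ComputeAverageScore(const std::vector<double>& studentScores) {
    if (studentScores.empty()) {
        throw std::invalid_argument("ComputeAverageScore: empty score list");
    }
    double totalOfAllScores = 0.0;
    for (double singleScore : studentScores) {
        totalOfAllScores += singleScore;
    }
    return totalOfAllScores / static_cast<double>(studentScores.size());
}

int main() {
    const std::vector<double> midtermScores = {88.0, 92.5, 79.5};
    std::printf("average: %.2f\n", ComputeAverageScore(midtermScores));
    return 0;
}
```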

What is 'good design'?

'Design' could refer to many things, but often refers to 'functional design' or 'internal design'. Good internal design is indicated by software code whose overall structure is clear, understandable, easily modifiable, and maintainable; is robust with sufficient error handling and status logging capability; and works correctly when implemented. Good functional design is indicated by an application whose functionality can be traced back to customer and end-user requirements. For programs that have a user interface, it's often a good idea to assume that the end user will have little computer knowledge and may not read a user manual or even the on-line help; some common rules-of-thumb include:

• The program should act in a way that least surprises the user.

• It should always be evident to the user what can be done next and how to exit.

• The program shouldn't let the users do something stupid without warning them.

What is the 'software life cycle'?

The life cycle begins when an application is first conceived and ends when it is no longer in use. It includes aspects such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, retesting, phase-out, and other aspects.

Will automated testing tools make testing easier?

* Possibly. For small projects, the time needed to learn and implement them may not be worth the benefit. For larger projects, or ongoing long-term projects, they can be valuable.

* A common type of automated tool is the 'record/playback' type. For example, a tester could click through all combinations of menu choices, dialog box choices, buttons, etc. in an application GUI and have them 'recorded', with the results logged by a tool. The 'recording' is typically in the form of text based on a scripting language that is interpretable by the testing tool. If new buttons are added, or some underlying code in the application is changed, the application might then be retested by just 'playing back' the 'recorded' actions and comparing the logged results to check the effects of the changes. The problem with such tools is that if there are continual changes to the system being tested, the 'recordings' may have to be changed so much that it becomes very time-consuming to continuously update the scripts. Additionally, interpretation and analysis of results (screens, data, logs, etc.) can be a difficult task. Note that there are also record/playback tools for text-based interfaces, and for all types of platforms.

* Another common approach to automating functional testing is 'data-driven' or 'keyword-driven' automated testing, in which the test drivers are separated from the data and/or actions utilized in testing (an 'action' would be something like 'enter a value in a text box'). Test drivers can be in the form of automated test tools or custom-written testing software. The data and actions can be more easily maintained - such as via a spreadsheet - since they are separate from the test drivers. The test drivers 'read' the data/action information to perform the specified tests. This approach can enable more efficient control, development, documentation, and maintenance of automated tests/test cases. (A minimal data-driven driver is sketched after this list.)

* Other automated tools can include:

* Code analyzers - monitor code complexity, adherence to standards, etc.

* Coverage analyzers - these tools check which parts of the code have been exercised by a test, and may be oriented to code statement coverage, condition coverage, path coverage, etc.

* Memory analyzers - such as bounds-checkers and leak detectors.

* Load/performance test tools - for testing client/server and web applications under various load levels.

* Web test tools - to check that links are valid, HTML code usage is correct, client-side and server-side programs work, and a web site's interactions are secure.

* Other tools - for test case management, documentation management, bug reporting, and configuration management.
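As a minimal sketch of the data-driven approach described above: the in-code table stands in for an external spreadsheet or CSV file that testers would maintain separately from the driver, and the function under test and all names are hypothetical:

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Hypothetical unit under test.
int add(int a, int b) { return a + b; }

// One row of test data: inputs plus the expected result, as would be
// maintained in a spreadsheet separate from the test driver.
struct TestRow {
    std::string name;
    int a, b, expected;
};

int main() {
    const std::vector<TestRow> rows = {
        {"positive operands", 2, 3, 5},
        {"negative operand", -4, 1, -3},
        {"zero operands", 0, 0, 0},
    };
    int failures = 0;
    // The driver 'reads' each row and performs the specified test.
    for (const TestRow& row : rows) {
        const int actual = add(row.a, row.b);
        if (actual != row.expected) {
            std::printf("FAIL %s: expected %d, got %d\n",
                        row.name.c_str(), row.expected, actual);
            ++failures;
        }
    }
    std::printf("%d failure(s)\n", failures);
    return failures == 0 ? 0 : 1;
}
```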


Q: How do you introduce a new software QA process?

A: It depends on the size of the organization and the risks involved. For large organizations with high-risk projects, serious management buy-in is required and a formalized QA process is necessary. For medium-sized organizations with lower-risk projects, management and organizational buy-in can be built through a slower, step-by-step process. Generally speaking, QA processes should be balanced with productivity, in order to keep any bureaucracy from getting out of hand. For smaller groups or projects, a more ad-hoc process is appropriate. A lot depends on team leads and managers; feedback to developers and good communication are essential among customers, managers, developers, test engineers and testers. Regardless of the size of the company, the greatest value for effort is in managing requirement processes, where the goal is requirements that are clear, complete and testable.

Q: What is the role of documentation in QA?

A: Documentation plays a critical role in QA. QA practices should be documented so that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports and user manuals should all be documented. Ideally, there should be a system for easily finding and obtaining documents and for determining which document will have a particular piece of information. Use documentation change management, if possible.

Q: What makes a good test engineer?

A: Good test engineers have a "test to break" attitude. They take the point of view of the customer, and have a strong desire for quality and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, as is an ability to communicate with both technical and non-technical people. Previous software development experience is also helpful, as it provides a deeper understanding of the software development process, gives the test engineer an appreciation for the developers' point of view, and reduces the learning curve in automated test tool programming.

Rob Davis is a good test engineer because he has a "test to break" attitude, takes the point of view of the customer, and has a strong desire for quality and an attention to detail. He's also tactful and diplomatic, and has good communication skills, both oral and written. He also has previous software development experience.

Q: What makes a good resume?

A: On the subject of resumes, there seems to be an unending discussion of whether you should or shouldn't have a one-page resume. The following are some of the comments I have personally heard:

"Well, Joe Blow (car salesman) said I should have a one-page resume."

"Well, I read a book and it said you should have a one page resume."

"I can't really go into what I really did because if I did, it'd take more than one page on my resume."

"Gosh, I wish I could put my job at IBM on my resume but if I did it'd make my resume more than one page, and I was told to never make the resume more than one page long."

"I'm confused, should my resume be more than one page? I feel like it should, but I don't want to break the rules."

Or, here's another comment, "People just don't read resumes that are longer than one page." I have heard some more, but we can start with these.

So what's the answer? There is no scientific answer about whether a one-page resume is right or wrong. It all depends on who you are and how much experience you have.

The first thing to look at here is the purpose of a resume. The purpose of a resume is to get you an interview. If the resume is getting you interviews, then it is considered to be a good resume. If the resume isn't getting you interviews, then you should change it.

The biggest mistake you can make on your resume is to make it hard to read, for several reasons.

One, scanners don't like odd resumes. Small fonts can make your resume harder to read. Some candidates use a 7-point font so they can get the resume onto one page. Big mistake.

Two, resume readers do not like eye strain either. If the resume is mechanically challenging, they just throw it aside for one that is easier on the eyes.

Three, there are lots of resumes out there these days, and that is also part of the problem.

Four, in light of the current scanning scenario, more than one page is not a deterrent because many will scan your resume into their database. Once the resume is in there and searchable, you have accomplished one of the goals of resume distribution.

Five, resume readers don't like to guess and most won't call you to clarify what is on your resume.

Generally speaking, your resume should tell your story. If you're a college graduate looking for your first job, a one-page resume is just fine. If you have a longer story, the resume needs to be longer. Please put your experience on the resume so resume readers can tell when and for whom you did what.

Short resumes -- for people long on experience -- are not appropriate. The real audience for these short resumes is people with short attention spans and low IQs. I assure you that when your resume gets into the right hands, it will be read thoroughly.

Q: What is a test plan?

A: A software project test plan is a document that describes the objectives, scope, approach and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the why and how of product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group will read it.

Q: What is a test case?

A: A test case is a document that describes an input, action, or event and its expected result, in order to determine if a feature of an application is working correctly. A test case should contain particulars such as the following (a rough sketch appears after the list):

• Test case identifier;

• Test case name;

• Objective;

• Test conditions/setup;

• Input data requirements/steps, and

• Expected results.

Please note, the process of developing test cases can help find problems in the requirements or design of an application, since it requires you to completely think through the operation of the application. For this reason, it is useful to prepare test cases early in the development cycle, if possible.
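As a rough sketch, the particulars above could be represented in a test-management tool or in code along the following lines (the field names and sample values are illustrative, not a standard):

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Illustrative record mirroring the particulars listed above.
struct TestCase {
    std::string id;                 // test case identifier, e.g. "TC-LOGIN-001"
    std::string name;               // short test case name
    std::string objective;          // what the test is meant to demonstrate
    std::string setup;              // test conditions / setup
    std::vector<std::string> steps; // input data requirements / steps
    std::string expectedResult;     // expected result to verify against
};

int main() {
    const TestCase loginTest = {
        "TC-LOGIN-001",
        "Reject wrong password",
        "Verify that an invalid password is refused",
        "Create user 'alice' with a known password",
        {"Open the login page", "Enter user 'alice'", "Enter an incorrect password"},
        "Login is refused with an 'invalid credentials' message",
    };
    std::printf("loaded test case %s: %s\n", loginTest.id.c_str(), loginTest.name.c_str());
    return 0;
}
```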

Q: What should be done after a bug is found?

A: When a bug is found, it needs to be communicated and assigned to developers who can fix it. After the problem is resolved, fixes should be re-tested. Additionally, determinations should be made regarding requirements, software, hardware, safety impact, etc., for regression testing to check that the fixes didn't create other problems elsewhere. If a problem-tracking system is in place, it should encapsulate these determinations. A variety of commercial problem-tracking/management software tools are available. These tools, with the detailed input of software test engineers, give the team complete information so developers can understand the bug, get an idea of its severity, reproduce it and fix it.

Q: What is configuration management?

A: Configuration management (CM) covers the tools and processes used to control, coordinate and track code, requirements, documentation, problems, change requests, designs, tools, compilers, libraries, patches, changes made to them and who makes the changes. Rob Davis has had experience with a full range of CM tools and concepts, and can easily adapt to your software tool and process needs.

Q: What if the software is so buggy it can't be tested at all?

A: In this situation the best bet is to have test engineers go through the process of reporting whatever bugs or problems initially show up, with the focus being on critical bugs.

Since this type of problem can severely affect schedules and indicates deeper problems in the software development process, such as insufficient unit testing, insufficient integration testing, poor design, improper build or release procedures, managers should be notified and provided with some documentation as evidence of the problem.

Q: What if there isn't enough time for thorough testing?

A: Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects.

Use risk analysis to determine where testing should be focused. This requires judgment skills, common sense and experience. A risk-analysis checklist should include answers to the following questions:

• Which functionality is most important to the project's intended purpose?

• Which functionality is most visible to the user?

• Which functionality has the largest safety impact?

• Which functionality has the largest financial impact on users?

• Which aspects of the application are most important to the customer?

• Which aspects of the application can be tested early in the development cycle?

• Which parts of the code are most complex and thus most subject to errors?

• Which parts of the application were developed in rush or panic mode?

• Which aspects of similar/related previous projects caused problems?

• Which aspects of similar/related previous projects had large maintenance expenses?

• Which parts of the requirements and design are unclear or poorly thought out?

• What do the developers think are the highest-risk aspects of the application?

• What kinds of problems would cause the worst publicity?

• What kinds of problems would cause the most customer service complaints?

• What kinds of tests could easily cover multiple functionalities?

• Which tests will have the best high-risk-coverage to time-required ratio?

Q: What if the project isn't big enough to justify extensive testing?

A: Consider the impact of project errors, not the size of the project. However, if extensive testing is still not justified, risk analysis is again needed, and the considerations listed under "What if there isn't enough time for thorough testing?" apply. The test engineer should then do "ad hoc" testing, or write up a limited test plan based on the risk analysis.

Q: What can be done if requirements are changing continuously?

A: Work with management early on to understand how requirements might change, so that alternate test plans and strategies can be worked out in advance. It is helpful if the application's initial design allows for some adaptability, so that later changes do not require redoing the application from scratch. Additionally, try to...

• Ensure the code is well commented and well documented; this makes changes easier for the developers.

• Use rapid prototyping whenever possible; this will help customers feel sure of their requirements and minimize changes.

• In the project's initial schedule, allow for extra time commensurate with probable changes.

• Move new requirements to a 'Phase 2' version of the application and use the original requirements for the 'Phase 1' version.

• Negotiate to allow only easily implemented new requirements into the project.

• Ensure customers and management understand scheduling impacts, inherent risks and costs of significant requirements changes. Then let management or the customers decide if the changes are warranted; after all, that's their job.

• Balance the effort put into setting up automated testing against the expected effort required to redo the tests to deal with changes.

• Design some flexibility into automated test scripts;

• Focus initial automated testing on application aspects that are most likely to remain unchanged;

• Devote appropriate effort to risk analysis of changes, in order to minimize regression-testing needs;

• Design some flexibility into test cases; this is not easily done; the best bet is to minimize the detail in the test cases, or set up only higher-level generic-type test plans;

• Focus less on detailed test plans and test cases and more on ad-hoc testing, with an understanding of the added risk this entails.

Q: How do you know when to stop testing?

A: This can be difficult to determine. Many modern software applications are so complex and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are...

• Deadlines, e.g. release deadlines, testing deadlines;

• Test cases completed with certain percentage passed;

• Test budget has been depleted;

• Coverage of code, functionality, or requirements reaches a specified point;

• Bug rate falls below a certain level; or

• Beta or alpha testing period ends.

Q: What if the application has functionality that wasn't in the requirements?

A: It may take serious effort to determine if an application has significant unexpected or hidden functionality, which would indicate deeper problems in the software development process. If the functionality isn't necessary to the purpose of the application, it should be removed, as it may have unknown impacts or dependencies that were not taken into account by the designer or the customer.

If not removed, design information will be needed to determine added testing or regression-testing needs. Management should be made aware of any significant added risks as a result of the unexpected functionality. If the functionality only affects minor areas, such as small improvements in the user interface, it may not be a significant risk.

Q: How can software QA processes be implemented without stifling productivity?

A: Implement QA processes slowly over time. Use consensus to reach agreement on processes and adjust and experiment as an organization grows and matures. Productivity will be improved instead of stifled. Problem prevention will lessen the need for problem detection. Panics and burnout will decrease and there will be improved focus and less wasted effort.

At the same time, attempts should be made to keep processes simple and efficient, minimize paperwork, promote computer-based processes and automated tracking and reporting, minimize time required in meetings and promote training as part of the QA process.

However, no one, especially talented technical types, likes bureaucracy, and in the short run things may slow down a bit. A typical scenario is that more days of planning and development will be needed, but less time will be required for late-night bug fixing and calming of irate customers.

Q: What if the organization is growing so fast that fixed QA processes are impossible?

A: This is a common problem in the software industry, especially in new technology areas. There is no easy solution in this situation, other than...

• Hire good people (i.e., hire Rob Davis);

• Ruthlessly prioritize quality issues and maintain focus on the customer;

• Everyone in the organization should be clear on what quality means to the customer.

Q: Why do you recommend that we test during the design phase?

A: Because testing during the design phase can prevent defects later on. We recommend verifying three things...

1. Verify the design is good, efficient, compact, testable and maintainable.

2. Verify the design meets the requirements and is complete (specifies all relationships between modules, how to pass data, what happens in exceptional circumstances, starting state of each module and how to guarantee the state of each module).

3. Verify the design provides for enough memory and I/O devices, and a fast enough runtime, for the final product.

Q: What is software quality assurance?

A: Software Quality Assurance, when Rob Davis does it, is oriented to *prevention*. It involves the entire software development process. Prevention is monitoring and improving the process, making sure any agreed-upon standards and procedures are followed and ensuring problems are found and dealt with.

Software testing, when performed by Rob Davis, is oriented to *detection*. Testing involves the operation of a system or application under controlled conditions and evaluating the results.

Rob Davis can provide QA/testing service. This document details some aspects of how he can provide software testing/QA service. For more information, e-mail rob@.

Organizations vary considerably in how they assign responsibility for QA and testing. Sometimes they're the combined responsibility of one group or individual.

Also common are project teams, which include a mix of test engineers, testers and developers, who work closely together, with overall QA processes monitored by project managers.

How software quality assurance is organized depends on what best fits your organization's size and business structure.

Q: How is testing affected by object-oriented designs?

A: A well-engineered object-oriented design can make it easier to trace from code to internal design to functional design to requirements. While there will be little effect on black box testing (where an understanding of the internal design of the application is unnecessary), white-box testing can be oriented to the application's objects. If the application was well designed, this can simplify test design.

Q: What is quality assurance?

A: Quality assurance ensures that all parties concerned with the project adhere to the process and procedures, standards and templates, and test readiness reviews.

Rob Davis' QA service depends on the customers and projects. A lot will depend on team leads or managers, feedback to developers, and communications among customers, managers, developers, test engineers and testers.

Q: Process and procedures - why follow them?

A: Detailed and well-written processes and procedures ensure the correct steps are being executed to facilitate a successful completion of a task. They also ensure a process is repeatable.

Once Rob Davis has learned and reviewed customer's business processes and procedures, he will follow them. He will also recommend improvements and/or additions.

Q: Standards and templates - what is supposed to be in a document?

A: All documents should be written to a certain standard and template. Standards and templates maintain document uniformity. They also help readers learn where information is located, making it easier for a user to find what they want. Lastly, with standards and templates, information will not be accidentally omitted from a document.

Once Rob Davis has learned and reviewed your standards and templates, he will use them. He will also recommend improvements and/or additions.

Q: What are the different levels of testing?

A: Rob Davis has expertise in testing at all testing levels listed below. At each test level, he documents the results.

Each level of testing is considered either black box or white box testing.

Q: What is black box testing?

A: Black box testing is functional testing, not based on any knowledge of internal software design or code. Black box tests are based on requirements and functionality.

Q: What is white box testing?

A: White box testing is based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths and conditions.
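
As a minimal sketch (in Python, with a hypothetical classify_discount function), white box tests are chosen by reading the code so that every branch is exercised:

    # Hypothetical function under test; reading its code shows two branches.
    def classify_discount(order_total):
        if order_total >= 100:       # branch taken for large orders
            return 0.10
        return 0.0                   # branch taken otherwise

    # One test per branch, derived from the code itself:
    assert classify_discount(150) == 0.10   # covers the 'if' branch
    assert classify_discount(50) == 0.0     # covers the fall-through branch
    print("both branches covered")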

Q: What is unit testing?

A: Unit testing is the first level of dynamic testing and is first the responsibility of developers and then that of the test engineers.

Unit testing is considered complete when the expected test results are met or differences are explainable/acceptable.
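
A minimal sketch of a developer-level unit test, assuming Python and its standard unittest module; add_tax is a hypothetical unit under test:

    import unittest

    def add_tax(amount, rate=0.07):
        """Hypothetical unit under test: applies a flat tax rate."""
        return round(amount * (1 + rate), 2)

    class AddTaxTest(unittest.TestCase):
        def test_typical_amount(self):
            self.assertEqual(add_tax(100.00), 107.00)

        def test_zero_amount(self):
            self.assertEqual(add_tax(0.00), 0.00)

    if __name__ == "__main__":
        unittest.main()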

Q: What is functional testing?

A: Functional testing is a black box type of testing geared to the functional requirements of an application. Test engineers *should* perform functional testing.

Q: What is usability testing?

A: Usability testing is testing for 'user-friendliness'. Clearly this is subjective and depends on the targeted end-user or customer. User interviews, surveys, video recording of user sessions and other techniques can be used. Programmers and developers are usually not appropriate as usability testers.

Q: What is incremental integration testing?

A: Incremental integration testing is continuous testing of an application as new functionality is added. This may require that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed.

Incremental testing may be performed by programmers, software engineers, or test engineers.

Q: What is parallel/audit testing?

A: Parallel/audit testing is testing where the user reconciles the output of the new system to the output of the current system to verify the new system performs the operations correctly.

Q: What is integration testing?

A: Upon completion of unit testing, integration testing begins. Integration testing is black box testing. The purpose of integration testing is to ensure distinct components of the application still work in accordance with customer requirements.

Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team.

Integration testing is considered complete when actual results and expected results are either in line or differences are explainable/acceptable based on client input.

Q: What is system testing?

A: System testing is black box testing performed by the test team; at the start of system testing, the complete system is configured in a controlled environment.

The purpose of system testing is to validate an application's accuracy and completeness in performing the functions as designed.

System testing simulates real-life scenarios in a controlled test environment and tests all functions of the system that are required in real life.

System testing is deemed complete when actual results and expected results are either in line or differences are explainable or acceptable, based on client input.

Upon completion of integration testing, system testing is started. Before system testing, all unit and integration test results are reviewed by Software QA to ensure all problems have been resolved. For a higher level of testing it is important to understand unresolved problems that originate at unit and integration test levels.


Q: What is end-to-end testing?

A: Similar to system testing, end-to-end testing is the *macro* end of the test scale: testing a complete application in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems.

Q: What is regression testing?

A: The objective of regression testing is to ensure the software remains intact. A baseline set of data and scripts is maintained and executed to verify changes introduced during the release have not "undone" any previous code. Expected results from the baseline are compared to results of the software under test. All discrepancies are highlighted and accounted for, before testing proceeds to the next level.
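
A minimal sketch of the baseline comparison in Python; run_report and the baseline file name are hypothetical stand-ins for the software under test and its stored, known-good output:

    import json

    def run_report():
        # Hypothetical call into the software under test.
        return {"total": 1234.56, "rows": 42}

    # Baseline output captured from a previously accepted release.
    with open("baseline_report.json") as f:
        baseline = json.load(f)

    actual = run_report()
    discrepancies = {key: (baseline[key], actual.get(key))
                     for key in baseline if baseline[key] != actual.get(key)}

    # Every discrepancy must be highlighted and accounted for
    # before testing proceeds to the next level.
    print(discrepancies or "output matches baseline")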

Q: What is sanity testing?

A: Sanity testing is performed whenever cursory testing is sufficient to prove the application is functioning according to specifications. This level of testing is a subset of regression testing.

It normally includes a set of core tests of basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.

Q: What is performance testing?

A: Although performance testing is described as a part of system testing, it can be regarded as a distinct level of testing. Performance testing verifies loads, volumes and response times, as defined by requirements.

Q: What is load testing?

A: Load testing is testing an application under heavy loads, such as the testing of a web site under a range of loads to determine at what point the system response time will degrade or fail.
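
A minimal sketch of the idea using only the Python standard library; the URL is a placeholder, and real load tests are usually run with dedicated tools. The load is stepped up while response times are recorded, to find the point of degradation:

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost:8000/"   # placeholder for the site under test

    def timed_request(_):
        start = time.time()
        urllib.request.urlopen(URL, timeout=10).read()
        return time.time() - start

    for users in (1, 10, 50, 100):   # increasing load levels
        with ThreadPoolExecutor(max_workers=users) as pool:
            times = list(pool.map(timed_request, range(users)))
        print(f"{users:4d} concurrent users: "
              f"avg {sum(times) / len(times):.3f}s, max {max(times):.3f}s")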

Q: What is installation testing?

A: Installation testing is testing full, partial, upgrade, or install/uninstall processes. The installation test for a release is conducted with the objective of demonstrating production readiness.

This test includes the inventory of configuration items, performed by the application's System Administration, the evaluation of data readiness, and dynamic tests focused on basic system functionality. When necessary, a sanity test is performed, following installation testing.

Q: What is security/penetration testing?

A: Security/penetration testing is testing how well the system is protected against unauthorized internal or external access, or willful damage.

This type of testing usually requires sophisticated testing techniques.

Q: What is recovery/error testing?

A: Recovery/error testing is testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Q: What is compatibility testing?

A: Compatibility testing is testing how well software performs in a particular hardware, software, operating system, or network environment.


Q: What is comparison testing?

A: Comparison testing is testing that compares software weaknesses and strengths to those of competitors' products.

Q: What is acceptance testing?

A: Acceptance testing is black box testing that gives the client/customer/project manager the opportunity to verify the system functionality and usability prior to the system being released to production.

The acceptance test is the responsibility of the client/customer or project manager; however, it is conducted with the full support of the project team. The test team also works with the client/customer/project manager to develop the acceptance criteria.

Q: What is alpha testing?

A: Alpha testing is testing of an application when development is nearing completion. Minor design changes can still be made as a result of alpha testing. Alpha testing is typically performed by a group that is independent of the design team, but still within the company, e.g. in-house software test engineers, or software QA engineers.

Q: What is beta testing?

A: Beta testing is testing an application when development and testing are essentially completed and final bugs and problems need to be found before the final release. Beta testing is typically performed by end-users or others, not programmers, software engineers, or test engineers.

Q: What is a Test/QA Team Lead?

A: The Test/QA Team Lead coordinates the testing activity, communicates testing status to management and manages the test team.

Q: What testing roles are standard on most testing projects?

A: Depending on the organization, the following roles are more or less standard on most testing projects: Testers, Test Engineers, Test/QA Team Lead, Test/QA Manager, System Administrator, Database Administrator, Technical Analyst, Test Build Manager and Test Configuration Manager.

Depending on the project, one person may wear more than one hat. For instance, Test Engineers may also wear the hat of Technical Analyst, Test Build Manager and Test Configuration Manager.


Q: What is a Test Engineer?

A: Test engineers are engineers who specialize in testing. We create test cases, procedures and scripts, and generate test data. We execute test procedures and scripts, analyze standards of measurement, and evaluate the results of system, integration and regression testing. We also...

• Speed up the work of the development staff;

• Reduce your organization's risk of legal liability;

• Give you the evidence that your software is correct and operates properly;

• Improve problem tracking and reporting;

• Maximize the value of your software;

• Maximize the value of the devices that use it;

• Assure the successful launch of your product by discovering bugs and design flaws, before users get discouraged, before shareholders lose their cool and before employees get bogged down;

• Help the work of your development staff, so the development team can devote its time to building your product;

• Promote continual improvement;

• Provide documentation required by FDA, FAA, other regulatory agencies and your customers;

• Save money by discovering defects 'early' in the design process, before failures occur in production, or in the field;

• Save the reputation of your company by discovering bugs and design flaws before they damage it.

Q: What is a Test Build Manager?

A: Test Build Managers deliver current software versions to the test environment, install the application's software and apply software patches to both the application and the operating system, and set up, maintain and back up test environment hardware.

Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a Test Build Manager.

Q: What is a System Administrator?

A: Test Build Managers, System Administrators and Database Administrators deliver current software versions to the test environment, install the application's software and apply software patches to both the application and the operating system, and set up, maintain and back up test environment hardware.

Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a System Administrator.

Q: What is a Database Administrator?

A: Test Build Managers, System Administrators and Database Administrators deliver current software versions to the test environment, install the application's software and apply software patches to both the application and the operating system, and set up, maintain and back up test environment hardware. Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a Database Administrator.

Q: What is a Technical Analyst?

A: Technical Analysts perform test assessments and validate system/functional test requirements. Depending on the project, one person may wear more than one hat. For instance, Test Engineers may also wear the hat of a Technical Analyst.

Q: What is a Test Configuration Manager?

A: Test Configuration Managers maintain test environments, scripts, software and test data. Depending on the project, one person may wear more than one hat. For instance, Test Engineers may also wear the hat of a Test Configuration Manager.

Q: What is a test schedule?

A: A test schedule identifies all tasks required for a successful testing effort, schedules all test activities, and lists resource requirements.

Q: What is software testing methodology?

A: One software testing methodology uses a three-step process of...

1. Creating a test strategy;

2. Creating a test plan/design; and

3. Executing tests.

This methodology can be used and molded to your organization's needs. Rob Davis believes that using this methodology is important in the development and ongoing maintenance of his clients' applications.

Q: What is the general testing process?

A: The general testing process is the creation of a test strategy (which sometimes includes the creation of test cases), creation of a test plan/design (which usually includes test cases and test procedures) and the execution of tests.

Q: How do you create a test plan/design?

A: Test scenarios and/or cases are prepared by reviewing functional requirements of the release and preparing logical groups of functions that can be further broken into test procedures. Test procedures define test conditions, data to be used for testing, and expected results, including database updates, file outputs and report results. Generally speaking...

• Test cases and scenarios are designed to represent both typical and unusual situations that may occur in the application.

• Test engineers define unit test requirements and unit test cases. Test engineers also execute unit test cases.

• It is the test team that, with assistance of developers and clients, develops test cases and scenarios for integration and system testing.

• Test scenarios are executed through the use of test procedures or scripts.

• Test procedures or scripts define a series of steps necessary to perform one or more test scenarios.

• Test procedures or scripts include the specific data that will be used for testing the process or transaction.

• Test procedures or scripts may cover multiple test scenarios.

• Test scripts are mapped back to the requirements and traceability matrices are used to ensure each test is within scope (a minimal sketch of such a matrix appears after this list).

• Test data is captured and baselined prior to testing. This data serves as the foundation for unit and system testing and is used to exercise system functionality in a controlled environment.

• Some output data is also baselined for future comparison. Baselined data is used to support future application maintenance via regression testing.

• A pretest meeting is held to assess the readiness of the application and the environment and data to be tested. A test readiness document is created to indicate the status of the entrance criteria of the release.
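
As referenced above, a minimal sketch of a traceability matrix in Python (requirement and script IDs are hypothetical): each requirement maps to the test scripts that exercise it, and the two checks flag gaps in either direction.

    # Hypothetical traceability matrix: requirement -> test scripts.
    traceability = {
        "REQ-001": ["TS-101", "TS-102"],
        "REQ-002": ["TS-103"],
        "REQ-003": [],                     # no coverage yet
    }

    # Check 1: every requirement is covered by at least one script.
    uncovered = [req for req, scripts in traceability.items() if not scripts]
    print("Requirements without tests:", uncovered)

    # Check 2: every script traces back to a requirement (is within scope).
    planned_scripts = {"TS-101", "TS-102", "TS-103", "TS-999"}
    traced = {s for scripts in traceability.values() for s in scripts}
    print("Scripts outside scope:", planned_scripts - traced)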

Inputs for this process:

• Approved Test Strategy Document.

• Test tools, or automated test tools, if applicable.

• Previously developed scripts, if applicable.

• Test documentation problems uncovered as a result of testing.

• A good understanding of software complexity and module path coverage, derived from general and detailed design documents, e.g. software design document, source code, and software complexity data.

Outputs for this process:

• Approved documents of test scenarios, test cases, test conditions, and test data.

• Reports of software design issues, given to software developers for correction.

Q: How do you execute tests?

A: Execution of tests is completed by following the test documents in a methodical manner. As each test procedure is performed, an entry is recorded in a test execution log to note the execution of the procedure and whether or not it uncovered any defects. Checkpoint meetings are held throughout the execution phase, daily if required, to address and discuss testing issues, status and activities.

• The output from the execution of test procedures is known as test results. Test results are evaluated by test engineers to determine whether the expected results have been obtained. All discrepancies/anomalies are logged, discussed with the software team lead, hardware test lead, programmers and software engineers, and documented for further investigation and resolution. Every company has a different process for logging and reporting bugs/defects uncovered during testing.

• Pass/fail criteria are used to determine the severity of a problem, and results are recorded in a test summary report. The severity of a problem found during system testing is defined in accordance with the customer's risk assessment and recorded in their selected tracking tool.

• Proposed fixes are delivered to the testing environment, based on the severity of the problem. Fixes are regression tested and flawless fixes are migrated to a new baseline. Following completion of the test, members of the test team prepare a summary report. The summary report is reviewed by the Project Manager, Software QA Manager and/or Test Team Lead.

• After a particular level of testing has been certified, it is the responsibility of the Configuration Manager to coordinate the migration of the release software components to the next test level, as documented in the Configuration Management Plan. The software is only migrated to the production environment after the Project Manager's formal acceptance.

• The test team reviews test document problems identified during testing, and updates documents where appropriate.

Inputs for this process:

• Approved test documents, e.g. Test Plan, Test Cases, Test Procedures.

• Test tools, including automated test tools, if applicable.

• Developed scripts.

• Changes to the design, i.e. Change Request Documents.

• Test data.

• Availability of the test team and project team.

• General and Detailed Design Documents, i.e. Requirements Document, Software Design Document.

• Software that has been migrated to the test environment, i.e. unit tested code, via the Configuration/Build Manager.

• Test Readiness Document.

• Document Updates.

Outputs for this process:

• Log and summary of the test results. Usually this is part of the Test Report. This needs to be approved and signed-off with revised testing deliverables.

• Changes to the code, also known as test fixes.

• Test document problems uncovered as a result of testing. Examples are Requirements document and Design Document problems.

• Reports on software design issues, given to software developers for correction. Examples are bug reports on code issues.

• Formal record of test incidents, usually part of problem tracking.

• Base-lined package, also known as tested source and object code, ready for migration to the next level.

Q: How do you create a test strategy?

A: The test strategy is a formal description of how a software product will be tested. A test strategy is developed for all levels of testing, as required. The test team analyzes the requirements, writes the test strategy and reviews the plan with the project team. The test plan may include test cases, conditions, the test environment, a list of related tasks, pass/fail criteria and risk assessment.

Inputs for this process:

• A description of the required hardware and software components, including test tools. This information comes from the test environment, including test tool data.

• A description of roles and responsibilities of the resources required for the test and schedule constraints. This information comes from man-hours and schedules.

• Testing methodology. This is based on known standards.

• Functional and technical requirements of the application. This information comes from requirements, change request, technical and functional design documents.

• Requirements that the system cannot provide, e.g. system limitations.

Outputs for this process:

• An approved and signed-off test strategy document and test plan, including test cases.

• Testing issues requiring resolution. Usually this requires additional negotiation at the project management level.

Q: What is security clearance?

A: Security clearance is a process of determining your trustworthiness and reliability before granting you access to national security information.

Q: What are the levels of classified access?

A: The levels of classified access are confidential, secret, top secret, and sensitive compartmented information, of which top secret is the highest.

Q: Why do I need clearance?

A: You need clearance whenever you work in a job where you have the potential to cause damage to national security, or in a job where you can gain access to classified information. You need clearance because of Executive Order 10450, signed by President Dwight Eisenhower on April 17, 1953. Executive Order 10450 gives the government the authority to require clearances of employees who request access to national security or sensitive information.

Q: How do I apply for clearance?

A: Many people think they can go to a company or agency and apply for their own clearances. This is far from the truth. The truth is, first you have to get a cleared job, and then, and only then, if you're successful in getting that cleared job, can you apply for clearance.

For example, XYZ Corporation, a mythical U.S. defense contractor, is awarded a DoD contract to work on a "mission critical" project, and thus XYZ has a specific need for a software QA/test engineer with clearance. If XYZ starts to look for a software QA/test engineer, and if you apply for the job, and if XYZ decides to employ you, then XYZ will sponsor you for clearance.

Q: What is a security clearance investigation?

A: The Defense Security Service (the agency that conducts all security investigations for the DoD) says, "A security clearance investigation is an inquiry into an individual's loyalty, character, trustworthiness and reliability to ensure that he or she is eligible for access to national security information. The investigation focuses on an individual's character and conduct, emphasizing such factors as honesty, trustworthiness, reliability, financial responsibility, criminal activity, emotional stability, and other similar and pertinent areas. All investigations consist of checks of national records and credit checks; some investigations also include interviews with individuals who know the candidate for the clearance as well as the candidate himself/herself."

Q: How long does it take to get my clearance?

A: An interim clearance can be had in as little as two weeks, but your full approval can take months, or even a full year. A typical investigation about you and your lifestyle takes approximately 120 days, sometimes longer, depending on backlog, need for more information, depth of the investigation, and other factors.

Q: How do I apply for clearance?

A: First you have to find the right job, apply for the job, and get a conditional offer of employment. Then you can apply for clearance, by filling out multi-page forms (e.g. federal form SF-86, National Security Questionnaire), giving the government lots of background and identification information on yourself, your relatives, and others. Then you're fingerprinted, interviewed, and then, in due time, investigators working for the government (i.e. DSS) will put you and your lifestyle under a microscope.

Q: How does the government decide who gets the clearance?

A: We don't know for sure, but our educated guess is, if you're a stereotypical all-American good guy, a long-time registered Republican and native-born U.S. citizen who is well liked by everyone - including girlfriends of your ex-girlfriends - and if all your life you have lived in the same house in Smalltown, USA, and if you have had a clearance before, then, chances are, your clearance will be granted.

Q: Can the government reject my application?

A: Surely they can! We don't know how the government decides, but our educated guess is, if you're not the stereotypical all-American good guy the government investigators have in mind, then, chances are, your application for clearance will be rejected.

They can reject you for hundreds of reasons, including sabotage, espionage, treason, terrorism, sedition; for wanting to overthrow the government; for friendship or sympathy with anyone who attempts to commit these crimes; acts of force or violence; lack of American citizenship, non-citizen spouses, family members, relatives, cohabitants, or friends; failing to report associations with foreigners; association with suspected foreign spies; financial or business interests in a foreign country; dual citizenship, foreign passport, accepting educational, medical, retirement or welfare benefits from a foreign country, residence in a foreign country, seeking or holding political office in a foreign country, voting in a foreign election, serving the interests of a foreign government, military service in a foreign country; compulsive or addictive behavior, self-destructive or high-risk behavior, personality disorder, any sexual behavior that makes you vulnerable, lack of discretion or judgment; uncooperative attitude, incomplete forms or releases, lack of full, frank and truthful answers, opinions of neighbors, associates, employers, or coworkers; omission, concealment, or falsification of facts; providing false or misleading information, concealment of information, any activity that makes you susceptible to blackmail, dishonesty, violation of rules or written agreements, association with criminals; too much money, too little money, not paying your debts, fraud, embezzlement, theft, income tax evasion, deceptive loan statements, breach of trust, addiction to gambling, drugs, or alcohol; driving while under the influence, fighting, child or spouse abuse, reporting for work drunk, drinking on the job, diagnosis of alcohol dependence, habitual consumption of alcohol, impaired judgment; drug use, possession, cultivation, processing, manufacture, purchase, sale, or distribution, diagnosis of drug abuse or drug dependence, unsuccessful drug treatment program; defect in judgment, reliability, or stability, failure to take prescribed medication; high-risk, irresponsible, aggressive, anti-social or emotionally unstable behavior; allegations of criminal conduct; disclosure of classified information, negligence; a job in a foreign country, service to a foreign national, foreign intelligence agent or organization; computer crimes, including unauthorized entry, modification, destruction, manipulation, denial of access to information, removal or introduction of hardware, software or media, and illegal downloading of files.

Q: What's the worst thing that can happen to me?

A: If the government rejects your application for clearance, then you'll have problems. For...

One, you will likely lose your job as soon as your access to classified information is terminated.

Two, you will be prosecuted and face civil and/or criminal charges, if investigators working for the government find evidence, any evidence, that you've done something illegal.

Three, you will have to respond, within 20 days of receiving the government's "letter of denial". You will have to respond in writing, admitting or denying the government's allegations.

Four, if you deny the government's allegations, and ask for a hearing, then you'll have to go to federal court, and at court it's YOU who will be on trial. It's YOU who will have to face a federal judge, federal prosecutor, and their evidence and witnesses.

Five, you might face large expenses. Why? Because your hearing will be conducted like a federal district court bench trial. Therefore you want an attorney on your side, to make an opening statement and closing argument, to conduct a direct examination of yourself and your other witnesses, and to cross-examine government witnesses. And attorneys usually cost a lot of money!

Six, please keep in mind, "appeals procedures... are very limited. In almost all cases, clearances are never restored."

Q: What if I get my clearance?

A: If you do get your clearance, then, congratulations! Now you are able to keep your job, maintain your access to classified information, and continue your work as a software QA/test engineer on the mission critical project you've been working on at XYZ, in support of the government.

Q: Is my clearance going to be a permanent one?

A: Yes and no. Yes, your clearance is going to stay active, either if you continue to work for XYZ, or if you leave XYZ but are able to find another employer that is eligible to hold your clearance, offers you a job, and you start working there within two years of leaving XYZ. But, no, clearances are not meant to be permanent; the government can and will reinvestigate you periodically. With or without investigation, they can also send you a "letter of intent to revoke a clearance" at any time.

Q: What if I cannot find an eligible employer?

A: If you cannot find an eligible employer within two years, then you will lose your clearance. If you ever need a clearance again, then you will have to re-apply, and you'll be re-investigated. Re-investigations occur every five years anyway if, for example, you hold a top secret clearance.

Q: Why should I apply for clearance?

A: Most of us, software QA/test engineers, don't work for money alone. If we apply, we apply because we respect, love, or trust the government; or because we want to help the government, or because we're attracted to the vision, ideals, and engineering challenges of the job.



What's a 'test plan'?

A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful but not so thorough that no one outside the test group will read it. The following are some of the items that might be included in a test plan, depending on the particular project:

* Title

* Identification of software including version/release numbers.

* Revision history of document including authors, dates, approvals.

* Table of Contents.

* Purpose of document, intended audience

* Objective of testing effort

* Software product overview

* Relevant related document list, such as requirements, design documents, other test plans, etc.

* Relevant standards or legal requirements

* Traceability requirements

* Relevant naming conventions and identifier conventions

* Overall software project organization and personnel/contact-info/responsibilities

* Test organization and personnel/contact-info/responsibilities

* Assumptions and dependencies

* Project risk analysis

* Testing priorities and focus

* Scope and limitations of testing

* Test outline - a decomposition of the test approach by test type, feature, functionality, process, system, module, etc. as applicable

* Outline of data input equivalence classes, boundary value analysis, error classes

* Test environment - hardware, operating systems, other required software, data configurations, interfaces to other systems

* Test environment validity analysis - differences between the test and production systems and their impact on test validity.

* Test environment setup and configuration issues

* Software migration processes

* Software CM processes

* Test data setup requirements

* Database setup requirements

* Outline of system-logging/error-logging/other capabilities, and tools such as screen capture software, that will be used to help describe and report bugs

* Discussion of any specialized software or hardware tools that will be used by testers to help track the cause or source of bugs

* Test automation - justification and overview

* Test tools to be used, including versions, patches, etc.

* Test script/test code maintenance processes and version control

* Problem tracking and resolution - tools and processes

* Project test metrics to be used

* Reporting requirements and testing deliverables

* Software entrance and exit criteria

* Initial sanity testing period and criteria

* Test suspension and restart criteria

* Personnel allocation

* Personnel pre-training needs

* Test site/location

* Outside test organizations to be utilized and their purpose, responsibilities, deliverables, contact persons, and coordination issues.

* Relevant proprietary, classified, security, and licensing issues.

* Open issues

* Appendix - glossary, acronyms, etc.

What's a 'test case'?

* A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.

* Note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it's useful to prepare test cases early in the development cycle if possible.
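
* A sketch of how those particulars might be captured in code (Python; the field names follow the list above, and all values are hypothetical):

    from dataclasses import dataclass

    @dataclass
    class TestCase:
        identifier: str
        name: str
        objective: str
        setup: str
        input_data: dict
        steps: list
        expected_result: str

    # Hypothetical example instance:
    tc = TestCase(
        identifier="TC-LOGIN-001",
        name="Valid login",
        objective="Verify a registered user can log in",
        setup="User 'alice' exists with a known password",
        input_data={"username": "alice", "password": "secret"},
        steps=["Open login page", "Enter credentials", "Click 'Log in'"],
        expected_result="User is redirected to the dashboard",
    )
    print(tc.identifier, "-", tc.name)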

What should be done after a bug is found?

* The bug needs to be communicated and assigned to developers who can fix it. After the problem is resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing to check that fixes didn't create problems elsewhere. If a problem-tracking system is in place, it should encapsulate these processes. A variety of commercial problem-tracking/management software tools are available (see the 'Tools' section for web resources with listings of such tools). The following are items to consider in the tracking process (a minimal sketch of such a record in code appears after this list):

* Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary.

* Bug identifier (number, ID, etc.)

* Current bug status (e.g., 'Released for Retest', 'New', etc.)

* The application name or identifier and version

* The function, module, feature, object, screen, etc. where the bug occurred

* Environment specifics, system, platform, relevant hardware specifics

* Test case name/number/identifier

* One-line bug description

* Full bug description

* Description of steps needed to reproduce the bug if not covered by a test case or if the developer doesn't have easy access to the test case/test script/test tool

* Names and/or descriptions of file/data/messages/etc. used in test

* File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in finding the cause of the problem

* Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)

* Was the bug reproducible?

* Tester name

* Test date

* Bug reporting date

* Name of developer/group/organization the problem is assigned to

* Description of problem cause

* Description of fix

* Code section/file/module/class/method that was fixed

* Date of fix

* Application version that contains the fix

* Tester responsible for retest

* Retest date

* Retest results

* Regression testing requirements

* Tester responsible for regression tests

* Regression testing results

* A reporting or tracking process should enable notification of appropriate personnel at various stages. For instance, testers need to know when retesting is needed, developers need to know when bugs are found and how to get the needed information, and reporting/summary capabilities are needed for managers.
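
* A minimal sketch, in Python, of a bug tracking record whose fields follow the list above (all names and values are hypothetical):

    from dataclasses import dataclass

    @dataclass
    class BugReport:
        bug_id: str                  # bug identifier
        status: str                  # e.g. 'New', 'Released for Retest'
        application: str
        version: str
        summary: str                 # one-line bug description
        steps_to_reproduce: str
        severity: int                # e.g. 1 (critical) to 5 (low)
        reproducible: bool
        tester: str
        assigned_to: str = ""
        fix_version: str = ""
        retest_result: str = ""

    bug = BugReport(
        bug_id="BUG-0421", status="New", application="OrderEntry",
        version="2.3.1", summary="Order total mis-rounds for 3-line orders",
        steps_to_reproduce="Add three items priced 0.10; check the total",
        severity=2, reproducible=True, tester="jsmith",
    )
    print(bug.bug_id, bug.status, "severity", bug.severity)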

What if the software is so buggy it can't really be tested at all?

* The best bet in this situation is for the testers to go through the process of reporting whatever bugs or blocking-type problems initially show up, with the focus being on critical bugs. Since this type of problem can severely affect schedules, and indicates deeper problems in the software development process (such as insufficient unit testing or insufficient integration testing, poor design, improper build or release procedures, etc.), managers should be notified and provided with some documentation as evidence of the problem.

How can it be known when to stop testing?

This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are:

* Deadlines (release deadlines, testing deadlines, etc.)

* Test cases completed with certain percentage passed

* Test budget depleted

* Coverage of code/functionality/requirements reaches a specified point

* Bug rate falls below a certain level

* Beta or alpha testing period ends

What if there isn't enough time for thorough testing?

* Use risk analysis to determine where testing should be focused. Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. This requires judgement skills, common sense, and experience. (If warranted, formal methods are also available.) Considerations can include:

* Which functionality is most important to the project's intended purpose?

* Which functionality is most visible to the user?

* Which functionality has the largest safety impact?

* Which functionality has the largest financial impact on users?

* Which aspects of the application are most important to the customer?

* Which aspects of the application can be tested early in the development cycle?

* Which parts of the code are most complex, and thus most subject to errors?

* Which parts of the application were developed in rush or panic mode?

* Which aspects of similar/related previous projects caused problems?

* Which aspects of similar/related previous projects had large maintenance expenses?

* Which parts of the requirements and design are unclear or poorly thought out?

* What do the developers think are the highest-risk aspects of the application?

* What kinds of problems would cause the worst publicity?

* What kinds of problems would cause the most customer service complaints?

* What kinds of tests could easily cover multiple functionalities?

* Which tests will have the best high-risk-coverage to time-required ratio?

What if the project isn't big enough to justify extensive testing?

* Consider the impact of project errors, not the size of the project. However, if extensive testing is still not justified, risk analysis is again needed and the same considerations as described previously in 'What if there isn't enough time for thorough testing?' apply. The tester might then do ad hoc testing, or write up a limited test plan based on the risk analysis.

What can be done if requirements are changing continuously?

This is a common problem and a major headache.

* Work with the project's stakeholders early on to understand how requirements might change so that alternate test plans and strategies can be worked out in advance, if possible.

* It's helpful if the application's initial design allows for some adaptability so that later changes do not require redoing the application from scratch.

* If the code is well-commented and well-documented this makes changes easier for the developers.

* Use rapid prototyping whenever possible to help customers feel sure of their requirements and minimize changes.

* The project's initial schedule should allow for some extra time commensurate with the possibility of changes.

* Try to move new requirements to a 'Phase 2' version of an application, while using the original requirements for the 'Phase 1' version.

* Negotiate to allow only easily-implemented new requirements into the project, while moving more difficult new requirements into future versions of the application.

* Be sure that customers and management understand the scheduling impacts, inherent risks, and costs of significant requirements changes. Then let management or the customers (not the developers or testers) decide if the changes are warranted - after all, that's their job.

* Balance the effort put into setting up automated testing with the expected effort required to re-do the tests to deal with changes.

* Try to design some flexibility into automated test scripts.

* Focus initial automated testing on application aspects that are most likely to remain unchanged.

* Devote appropriate effort to risk analysis of changes to minimize regression testing needs.

* Design some flexibility into test cases (this is not easily done; the best bet might be to minimize the detail in the test cases, or set up only higher-level generic-type test plans)

* Focus less on detailed test plans and test cases and more on ad hoc testing (with an understanding of the added risk that this entails).

What if the application has functionality that wasn't in the requirements?

* It may take serious effort to determine if an application has significant unexpected or hidden functionality, and it would indicate deeper problems in the software development process. If the functionality isn't necessary to the purpose of the application, it should be removed, as it may have unknown impacts or dependencies that were not taken into account by the designer or the customer. If not removed, design information will be needed to determine added testing needs or regression testing needs. Management should be made aware of any significant added risks as a result of the unexpected functionality. If the functionality only affects areas such as minor improvements in the user interface, for example, it may not be a significant risk.

How can QA processes be implemented without stifling productivity?

* By implementing QA processes slowly over time, using consensus to reach agreement on processes, and adjusting and experimenting as an organization grows and matures, productivity will be improved instead of stifled. Problem prevention will lessen the need for problem detection, panics and burn-out will decrease, and there will be improved focus and less wasted effort. At the same time, attempts should be made to keep processes simple and efficient, minimize paperwork, promote computer-based processes and automated tracking and reporting, minimize time required in meetings, and promote training as part of the QA process. However, no one - especially talented technical types - likes rules or bureaucracy, and in the short run things may slow down a bit. A typical scenario would be that more days of planning and development will be needed, but less time will be required for late-night bug-fixing and calming of irate customers. (See the Books section's 'Software QA', 'Software Engineering', and 'Project Management' categories for useful books with more information.)

What if an organization is growing so fast that fixed QA processes are impossible?

* This is a common problem in the software industry, especially in new technology areas. There is no easy solution in this situation, other than:

* Hire good people

* Management should 'ruthlessly prioritize' quality issues and maintain focus on the customer

* Everyone in the organization should be clear on what 'quality' means to the customer

How does a client/server environment affect testing?

* Client/server applications can be quite complex due to the multiple dependencies among clients, data communications, hardware, and servers. Thus testing requirements can be extensive. When time is limited (as it usually is) the focus should be on integration and system testing. Additionally, load/stress/performance testing may be useful in determining client/server application limitations and capabilities. There are commercial tools to assist with such testing. (See the 'Tools' section for web resources with listings that include these kinds of test tools.)

How can World Wide Web sites be tested?

* Web sites are essentially client/server applications - with web servers and 'browser' clients. Consideration should be given to the interactions between html pages, TCP/IP communications, Internet connections, firewalls, applications that run in web pages (such as applets, javascript, plug-in applications), and applications that run on the server side (such as cgi scripts, database interfaces, logging applications, dynamic page generators, asp, etc.). Additionally, there are a wide variety of servers and browsers, various versions of each, small but sometimes significant differences between them, variations in connection speeds, rapidly changing technologies, and multiple standards and protocols. The end result is that testing for web sites can become a major ongoing effort. Other considerations might include:


* What are the expected loads on the server (e.g., number of hits per unit time), and what kind of performance is required under such loads (such as web server response time, or database query response times)? What kinds of tools will be needed for performance testing (such as web load testing tools, other tools already in house that can be adapted, web robot downloading tools, etc.)?

* Who is the target audience? What kind of browsers will they be using? What kind of connection speeds will they be using? Are they intra-organization (thus with likely high connection speeds and similar browsers) or Internet-wide (thus with a wide variety of connection speeds and browser types)?

* What kind of performance is expected on the client side (e.g., how fast should pages appear, how fast should animations, applets, etc. load and run)?

* Will down time for server and content maintenance/upgrades be allowed? How much?


* How reliable are the site's Internet connections required to be? And how does that affect backup system or redundant connection requirements and testing?

* What processes will be required to manage updates to the web site's content, and what are the requirements for maintaining, tracking, and controlling page content, graphics, links, etc.?

* Which HTML specification will be adhered to? How strictly? What variations will be allowed for targeted browsers?

* Will there be any standards or requirements for page appearance and/or graphics throughout a site or parts of a site?

* How will internal and external links be validated and updated? How often?

* Can testing be done on the production system, or will a separate test system be required? How are browser caching, variations in browser option settings, dial-up connection variabilities, and real-world internet 'traffic congestion' problems to be accounted for in testing?

* How extensive or customized are the server logging and reporting requirements; are they considered an integral part of the system and do they require testing?

* How are cgi programs, applets, javascripts, ActiveX components, etc. to be maintained, tracked, controlled, and tested?

* Pages should be 3-5 screens max unless content is tightly focused on a single topic. If larger, provide internal links within the page.

* The page layouts and design elements should be consistent throughout a site, so that it's clear to the user that they're still within a site.

* Pages should be as browser-independent as possible, or pages should be provided or generated based on the browser-type.

* All pages should have links external to the page; there should be no dead-end pages.

* The page owner, revision date, and a link to a contact person or organization should be included on each page.

What is Extreme Programming and what's it got to do with testing?

* Extreme Programming (XP) is a software development approach for small teams on risk-prone projects with unstable requirements. It was created by Kent Beck, who described the approach in his book 'Extreme Programming Explained' (see the Books page). Testing ('extreme testing') is a core aspect of Extreme Programming. Programmers are expected to write unit and functional test code first - before the application is developed. Test code is under source control along with the rest of the code. Customers are expected to be an integral part of the project team and to help develop scenarios for acceptance/black box testing. Acceptance tests are preferably automated, and are modified and rerun for each of the frequent development iterations. QA and test personnel are also required to be an integral part of the project team. Detailed requirements documentation is not used, and frequent re-scheduling, re-estimating, and re-prioritizing is expected.
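
* A minimal sketch of the test-first idea in Python (the shipping_cost function and its pricing rules are hypothetical): the test is written first and fails, then just enough code is written to make it pass, and the tests are rerun on each iteration.

    import unittest

    # Written second, only to satisfy the tests below (test-first order).
    def shipping_cost(weight_kg):
        return 5.00 if weight_kg <= 1.0 else 5.00 + 2.00 * (weight_kg - 1.0)

    # Written first; fails ('red') until shipping_cost exists and is
    # correct, then passes ('green').
    class ShippingCostTest(unittest.TestCase):
        def test_small_parcel_flat_rate(self):
            self.assertEqual(shipping_cost(0.5), 5.00)

        def test_heavier_parcel_adds_per_kilo(self):
            self.assertEqual(shipping_cost(3.0), 9.00)

    if __name__ == "__main__":
        unittest.main()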

Manual (b)

1. What is Acceptance Testing?

Testing conducted to enable a user/customer to determine whether to accept a software product. Normally performed to validate that the software meets a set of agreed acceptance criteria.

2. What is Accessibility Testing?

Verifying a product is accessible to people with disabilities (e.g. deaf, blind, or mentally disabled users).

3. What is Ad Hoc Testing?

A testing phase where the tester tries to 'break' the system by randomly trying the system's functionality. Can include negative testing as well. See also Monkey Testing.

4. What is Agile Testing?

Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm. See also Test Driven Development.

5. What is Application Binary Interface (ABI)?

A specification defining requirements for portability of applications in binary form across different system platforms and environments.

6. What is Application Programming Interface (API)?

A formalized set of software calls and routines that can be referenced by an application program in order to access supporting system or network services.

7. What is Automated Software Quality (ASQ)?

The use of software tools, such as automated testing tools, to improve software quality.

8. What is Automated Testing?

Testing employing software tools which execute tests without manual intervention. Can be applied in GUI, performance, API, etc. testing. The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.

9. What is Backus-Naur Form?

A metalanguage used to formally describe the syntax of a language.

10. What is Basic Block?

A sequence of one or more consecutive, executable statements containing no branches.

11. What is Basis Path Testing?

A white box test case design technique that uses the algorithmic flow of the program to design tests.

12. What is Basis Set?

The set of tests derived using basis path testing.

13. What is Baseline?

The point at which some deliverable produced during the software engineering process is put under formal change control.

14. What will you do during the first day on the job? What would you like to be doing five years from now?

15. What is Beta Testing?

Testing of a pre-release of a software product, conducted by customers.

16. What is Binary Portability Testing?

Testing an executable application for portability across system platforms and environments, usually for conformance to an ABI specification.

17. What is Black Box Testing?

Testing based on an analysis of the specification of a piece of software without reference to its internal workings. The goal is to test how well the component conforms to the published requirements for the component.

18. What is Bottom Up Testing?

An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.

19. What is Boundary Testing?

Tests which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests.)

20. What is Bug?

A fault in a program, which causes the program to perform in an unintended or unanticipated manner.

20. What is Defect?

If the software omits a feature or function that is specified in the requirements, it is called a defect.

21. What is Boundary Value Analysis?

BVA is similar to Equivalence Partitioning but focuses on "corner cases" or values that are usually out of range as defined by the specification. This means that if a function expects all values in the range of negative 100 to positive 1000, test inputs would include negative 101 and positive 1001.
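
Taking the range in that example literally, a sketch of the resulting test inputs (in Python; validate_value is a hypothetical function that accepts -100 to +1000 inclusive):

    def validate_value(n):
        # Hypothetical function under test: accepts -100..1000 inclusive.
        return -100 <= n <= 1000

    # Boundary value analysis: test on, just inside, and just outside
    # each boundary of the specified range.
    cases = {-101: False, -100: True, -99: True,
             999: True, 1000: True, 1001: False}
    for value, expected in cases.items():
        assert validate_value(value) == expected, value
    print("all boundary cases pass")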

22. What is Branch Testing?

Testing in which all branches in the program source code are tested at least once.

23. What is Breadth Testing?

A test suite that exercises the full functionality of a product but does not test features in detail.

24. What is CAST?

Computer Aided Software Testing.

25. What is Capture/Replay Tool?

A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time. Most commonly applied to GUI test tools.

26. What is CMM?

The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging the maturity of the software processes of an organization and for identifying the key practices that are required to increase the maturity of these processes.

27. What is Cause Effect Graph?

A graphical representation of inputs and their associated output effects, which can be used to design test cases.

28. What is Code Complete?

Phase of development where functionality is implemented in entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.

29. What is Code Coverage?

An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.

30. What is Code Inspection?

A formal testing technique where the programmer reviews source code with a group who ask questions, analyzing the program logic, checking the code against a checklist of historically common programming errors, and analyzing its compliance with coding standards.

31. What is Code Walkthrough?

A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.

32. What is Coding?

The generation of source code.

33. What is Compatibility Testing?

Testing whether software is compatible with other elements of a system with which it should operate, e.g. browsers, Operating Systems, or hardware.

34. What is Component?

A minimal software item for which a separate specification is available.

35. What is Component Testing?

Testing of individual software components (Unit Testing).

36. What is Concurrency Testing?

Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.

37. What is Conformance Testing?

The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.

38. What is Context Driven Testing?

The context-driven school of software testing is a flavor of Agile Testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.

39. What is Conversion Testing?

Testing of programs or procedures used to convert data from existing systems for use in replacement systems.

40. What is Cyclomatic Complexity?

A measure of the logical complexity of an algorithm, used in white-box testing.
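The usual formulation is McCabe's measure V(G) = E - N + 2P, where E is the number of edges and N the number of nodes in the control-flow graph, and P the number of connected components; for a single routine this equals the number of decision points plus one. A small sketch (the flow-graph counts are made up for illustration):

    def cyclomatic_complexity(edges, nodes, components=1):
        # McCabe's measure: V(G) = E - N + 2P
        return edges - nodes + 2 * components

    # A flow graph with 9 edges and 7 nodes in one connected component:
    print(cyclomatic_complexity(9, 7))  # 4 -> up to 4 independent paths to cover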

41. What is Data Dictionary?

A database that contains definitions of all data items defined during analysis.

42. What is Data Flow Diagram?

A modeling notation that represents a functional decomposition of a system.

43. What is Data Driven Testing?

Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in Automated Testing.
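A minimal sketch of data-driven testing in Python (the add function and the inline CSV data are invented for illustration; in practice the data would live in an external file or spreadsheet):

    import csv
    import io

    # Externally maintained test data (here inlined; normally a .csv file).
    CASES = io.StringIO("a,b,expected\n2,3,5\n-1,1,0\n0,0,0\n")

    def add(a, b):  # the unit under test
        return a + b

    for row in csv.DictReader(CASES):
        actual = add(int(row["a"]), int(row["b"]))
        assert actual == int(row["expected"]), row
    print("all data-driven cases passed")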

44. What is Debugging?

The process of finding and removing the causes of software failures.

45. What is Defect?

Nonconformance to requirements or functional / program specification

46. What is Dependency Testing?

Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.

47. What is Depth Testing?

A test that exercises a feature of a product in full detail.

48. What is Dynamic Testing?

Testing software through executing it. See also Static Testing.

49. What is Emulator?

A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.

50. What is Endurance Testing?

Checks for memory leaks or other problems that may occur with prolonged execution.

51. What is End-to-End testing?

Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

52. What is Equivalence Class?

A portion of a component's input or output domain for which the component's behaviour is assumed, from the component's specification, to be the same.

53. What is Equivalence Partitioning?

A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.
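A minimal sketch, assuming a field that accepts ages 18 to 60 (the rule and values are illustrative): three equivalence classes — below range, in range, above range — with one representative tested per class:

    def accepts_age(age):           # the validation rule under test
        return 18 <= age <= 60

    representatives = {"below": 10, "valid": 35, "above": 70}
    assert not accepts_age(representatives["below"])
    assert accepts_age(representatives["valid"])
    assert not accepts_age(representatives["above"])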

54. What is Exhaustive Testing?

Testing which covers all combinations of input values and preconditions for an element of the software under test.

55. What is Functional Decomposition?

A technique used during planning, analysis and design; creates a functional hierarchy for the software.

54. What is Functional Specification?

A document that describes in detail the characteristics of the product with regard to its intended features.

55. What is Functional Testing?

Testing the features and operational behavior of a product to ensure they correspond to its specifications; testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions. Also known as Black Box Testing.

56. What is Glass Box Testing?

A synonym for White Box Testing.

57. What is Gorilla Testing?

Testing one particular module or piece of functionality heavily.

58. What is Gray Box Testing?

A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.

59. What is High Order Tests?

Black-box tests conducted once the software has been integrated.

60. What is Independent Test Group (ITG)?

A group of people whose primary responsibility is software testing.

61. What is Inspection?

A group review quality improvement process for written material. It consists of two aspects: product (document itself) improvement and process improvement (of both document production and inspection).

62. What is Integration Testing?

Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.

63. What is Installation Testing?

Confirms that the application under test installs and uninstalls correctly on the supported platforms and configurations, and that it operates properly after installation.

64. What is Load Testing?

See Performance Testing.

65. What is Localization Testing?

Testing that verifies software behaves correctly when adapted for a specific locality (language, date and number formats, and other regional conventions).

66. What is Loop Testing?

A white box testing technique that exercises program loops.

67. What is Metric?

A standard of measurement. Software metrics are the statistics describing the structure or content of a program. A metric should be a real objective measurement of something such as number of bugs per lines of code.

68. What is Monkey Testing?

Testing a system or an application on the fly, i.e. running just a few tests here and there to ensure that the system or application does not crash.

69. What is Negative Testing?

Testing aimed at showing software does not work. Also known as "test to fail". See also Positive Testing.

70. What is Path Testing?

Testing in which all paths in the program source code are tested at least once.

71. What is Performance Testing?

Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".

72. What is Positive Testing?

Testing aimed at showing software works. Also known as "test to pass". See also Negative Testing.

73. What is Quality Assurance?

All those planned or systematic actions necessary to provide adequate confidence that a product or service is of the type and quality needed and expected by the customer.

74. What is Quality Audit?

A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives.

75. What is Quality Circle?

A group of individuals with related interests that meet at regular intervals to consider problems or other matters related to the quality of outputs of a process and to the correction of problems or to the improvement of quality.

76. What is Quality Control?

The operational techniques and the activities used to fulfill and verify requirements of quality.

77. What is Quality Management?

That aspect of the overall management function that determines and implements the quality policy.

78. What is Quality Policy?

The overall intentions and direction of an organization as regards quality as formally expressed by top management.

79. What is Quality System?

The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.

80. What is Race Condition?

A cause of concurrency problems. Multiple accesses to a shared resource, at least one of which is a write, with no mechanism used by either to moderate simultaneous access.
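A minimal sketch of such a race in Python (the counter and thread counts are illustrative): several threads perform an unsynchronized read-modify-write, so updates can be lost:

    import threading

    counter = 0

    def unsafe_increment(n):
        global counter
        for _ in range(n):
            counter += 1     # read-modify-write with no lock: the race

    threads = [threading.Thread(target=unsafe_increment, args=(100_000,))
               for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    # Expected 400000; without a threading.Lock the total is often lower.
    print(counter)

Guarding the increment with a threading.Lock removes the race.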

81. What is Ramp Testing?

Continuously raising an input signal until the system breaks down.

82. What is Recovery Testing?

Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions

83. What is Regression Testing?

Retesting a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.

84. What is Release Candidate?

A pre-release version, which contains the desired functionality of the final version, but which needs to be tested for bugs (which ideally should be removed before the final version is released).

85. What is Sanity Testing?

Brief test of major functional elements of a piece of software to determine if it is basically operational. See also Smoke Testing.

86. What is Scalability Testing?

Performance testing focused on ensuring the application under test gracefully handles increases in work load.

87. What is Security Testing?

Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.

88. What is Smoke Testing?

A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.

89. What is Soak Testing?

Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.

90. What is Software Requirements Specification?

A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for software.

91. What is Software Testing?

A set of activities conducted with the intent of finding errors in software.

92. What is Static Analysis?

Analysis of a program carried out without executing the program.

93. What is Static Analyzer?

A tool that carries out static analysis.

94. What is Static Testing?

Analysis of a program carried out without executing the program.

95. What is Storage Testing?

Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage.

96. What is Stress Testing?

Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load.

97. What is Structural Testing?

Testing based on an analysis of internal workings and structure of a piece of software. See also White Box Testing.

98. What is System Testing?

Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.

99. What is Testability?

The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.

100. What is Testing?

The process of exercising software to verify that it satisfies specified requirements and to detect errors. The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs), and to evaluate the features of the software item (Ref. IEEE Std 829). The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.

What is Test Automation?

It is the same as Automated Testing.

101. What is Test Bed?

An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.

102. What is Test Case?

Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A Test Case will consist of information such as requirements tested, test steps, verification steps, prerequisites, outputs, test environment, etc. A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.

What is Test Driven Development?

Testing methodology associated with Agile Programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a lot of tests, i.e. an amount of test code comparable in size to the production code.

103. What is Test Driver?

A program or test tool used to execute tests. Also known as a Test Harness.

104. What is Test Environment?

The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test including stubs and test drivers.

105. What is Test First Design?

Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test.
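A minimal sketch of the test-first rhythm using Python's unittest (the slugify function is a made-up example): the test is written first and fails, then just enough production code is written to make it pass:

    import unittest

    # Step 1 (red): the test exists before the production code.
    class TestSlugify(unittest.TestCase):
        def test_replaces_spaces_with_hyphens(self):
            self.assertEqual(slugify("Hello World"), "hello-world")

    # Step 2 (green): the minimal production code to satisfy the test.
    def slugify(text):
        return text.strip().lower().replace(" ", "-")

    if __name__ == "__main__":
        unittest.main()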

106. What is Test Harness?

A program or test tool used to execute tests. Also known as a Test Driver.

107. What is Test Plan?

A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.

108. What is Test Procedure?

A document providing detailed instructions for the execution of one or more test cases.

109. What is Test Script?

Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool.

110. What is Test Specification?

A document specifying the test approach for a software feature or combination of features, and the inputs, predicted results and execution conditions for the associated tests.

111. What is Test Suite?

A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization. There may be several Test Suites for a particular product for example. In most cases however a Test Suite is a high level concept, grouping together hundreds or thousands of tests related by what they are intended to test.

112. What are Test Tools?

Computer programs used in the testing of a system, a component of the system, or its documentation.

113. What is Thread Testing?

A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.

114. What is Top Down Testing?

An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.

115. What is Total Quality Management?

A company commitment to develop a process that achieves high-quality products and customer satisfaction.

116. What is Traceability Matrix?

A document showing the relationship between Test Requirements and Test Cases.

117. What is Usability Testing?

Testing the ease with which users can learn and use a product.

118. What is Use Case?

The specification of tests that are conducted from the end-user perspective. Use cases tend to focus on operating software as an end-user would conduct their day-to-day activities.

119. What is Unit Testing?

Testing of individual software components.

120. How do companies expect defect reporting to be communicated by the tester to the development team? Can an Excel sheet template be used for defect reporting? If so, what are the common fields to be included? Who assigns the priority and severity of a defect?

To report bugs in Excel, use columns such as:

S.No. | Module | Screen/Section | Issue detail | Severity | Priority | Issue status

This is how to report bugs in an Excel sheet; also set filters on the column headers. (A minimal scripted sketch of this template appears after the field list below.)

Most companies, however, use a shared defect management system. When a project comes in for testing, module-wise details of the project are entered into the defect management system being used. It contains the following fields:

1. Date

2. Issue brief

3. Issue description (used by the developer to regenerate the issue)

4. Issue status (active, resolved, on hold, suspended, not able to regenerate)

5. Assigned to (names of members allocated to the project)

6. Priority (High, Medium, Low)

7. Severity (Major, Medium, Low)
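A minimal sketch of generating such an Excel-compatible defect log in Python (the field names follow the template above; the sample row is invented for illustration):

    import csv

    FIELDS = ["S.No", "Module", "Screen/Section", "Issue detail",
              "Severity", "Priority", "Issue status"]

    rows = [
        [1, "Login", "Sign-in form",
         "No error message shown for blank password", "Medium", "High", "Active"],
    ]

    with open("defect_log.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(FIELDS)   # column headers, on which filters can be set
        writer.writerows(rows)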

121. How do you plan test automation?

1. Prepare the automation Test plan

2. Identify the scenario

3. Record the scenario

4. Enhance the scripts by inserting check points and Conditional Loops

5. Incorporate an error handler

6. Debug the script

7. Fix the issue

8. Rerun the script and report the result

122. Does automation replace manual testing?

There can be some functionality which cannot be tested with an automated tool, so we may have to test it manually; therefore manual testing can never be replaced. (We can write scripts for negative testing also, but it is a hectic task.) When we talk about a real environment, we do negative testing manually.

123. How will you choose a tool for test automation?

Choosing a tool depends on many things:

1. Application to be tested

2. Test environment

3. Scope and limitation of the tool.

4. Feature of the tool.

5. Cost of the tool.

6. Whether the tool is compatible with your application, which means the tool should be able to interact with your application

7. Ease of use

124. How you will evaluate the tool for test automation?

We need to concentrate on the features of the tool and how they could be beneficial for our project. The additional new features and enhancements of the features will also help.

125. How you will describe testing activities?

Testing activities start from the elaboration phase. The various testing activities are: preparing the test plan, preparing the test cases, executing the test cases, logging the bugs, validating the bugs and taking appropriate action for them, and automating the test cases.

126. What testing activities you may want to automate?

Automate all the high-priority test cases which need to be executed as part of regression testing for each build cycle.

127. Describe common problems of test automation.

The common problems are:

1. Maintenance of the old script when there is a feature change or enhancement

2. The change in technology of the application will affect the old scripts

128. What types of scripting techniques for test automation do you know?

5 types of scripting techniques:

Linear

Structured

Shared

Data Driven

Keyword Driven

129. What is memory leaks and buffer overflows ?

Memory leaks mean incomplete deallocation of memory; they are bugs that happen very often. Buffer overflow means data sent as input to the server that overflows the boundaries of the input area, thus causing the server to misbehave; buffer overflows can also be exploited as security vulnerabilities.

130. What are the major differences between stress testing,load testing,Volume testing?

Stress testing means progressively increasing the load and checking the performance at each level, up to and beyond the specified limits. Load testing means applying the expected load (or more than the expected load) at one time and checking the performance at that level. Volume testing means testing the application with large volumes of data.

Quality Management

1). What is meant by configuration item?

In the SDLC, any item that needs to be changed in the project is considered a configuration item. A few examples of configuration items are program/source code, JCLs, and even quality-related documents.

2). What is meant by change request?

After the release of the first version of the software, when the user needs some changes to the previous version, it is called a "change request".

3). What is meant by version control?

To brief about version control: it is a mechanism for managing the versions of files in a project. For example, take a file A prepared on a certain date by a person X, and the same file modified by a person Y on the same date. How can this be tracked? This is where we apply version control tools like VSS or CVS, which help us keep track of versions along with the modifier's name, time, and comments. If the data is lost by any chance, the version control tool helps us recover the latest data from the repository.

4). How can one obtain and maintain certifications and levels such as IEEE, ISO, PCMM, CMM, Six Sigma, etc.?

Simply ask your management to apply for the appropriate certification. Once you are certified, say ISO 9001:2000, there will be a set of primary and secondary objectives, such as: effort variance should not be > 10%, defect escape rate should be less than 7%. You try to achieve them, and based on a periodic evaluation you can get to know whether you are maintaining the required level or not.

5). Give examples of Non-Conformity, Improvement Note, and Preventive Action, or case studies for a Quality Assurance Group.

Non-conformity:

The definition covers the departure or absence of one or more quality characteristics or quality system elements from specified requirements.

6). How can we measure the quality of an application, on a quality and quantity basis?

The quality of the application depends on:

1) Prevention: user training, help desk

2) Appraisal: testing

3) Review: retesting

All of these are included in the cost of the application, and together they can help us gauge the quality of the application.

7). What is a KPA in QA? Can you explain in detail?

Capability maturity model (CMM) has 5 maturity levels.

A) Initial B) Repeatable C) Defined D) Managed and E) Optimizing

Each of the above level except "Initial" is composed of several process areas called "Key Process Area". Again each KPA has 5 sections called common features.

8). Which Level of CMMI implements V Model?

There is no connection between the V model and CMMI.

9). TQM stands for _______________

A) Team Quality Management

B) Total Quality Management

C) Total Quick Management

D) Total Quality Managers

Ans:

B) Total Quality Management

10). What does the term QCD stand for?

A) Quality, Cost, Delivery

B) Quality, Cause, Delivery

C) Quantity, Cost, Delivery

D) Quality, Cost, Demand

Ans:

B) Quality, Cost, Delivery

11). Quality means conformance to specifications

A) True

B) False

Ans:

B) Industry accepted definitions of quality are “conformance to requirements” (from Philip Crosby) and “fit for use” (from Dr. Joseph Juran and Dr. W. Edwards Deming).

12). Quality means delivering products and services that

A) Meet customer standards

B) meet and fulfill customer needs

C) meet customer expectations

D) All of the above

Ans: D) All of the above

13). According to ISO, Quality is the totality of features and characteristics of a product or service that bears on its ability to ______ a stated or implied need.

A) Meet and/or Exceed

14). As per TQM, the TWO types of Customers for any organization are ___ and ___

Explanation: TQM focuses on TWO types of Customers namely INTERNAL and EXTERNAL Customers.

15). As per TQM, ‘World-Class’ means,

A) World wide presence

B) Best in the world

C) World wide sales

D) World wide web

Explanation: TQM focuses on being the best in the world, irrespective of whether the organization has a worldwide presence or not.

16). Process means ‘What is actually done to create a product or deliver a service’

Explanation: TQM focuses both on Product and Service.

17). Concurrent Engineering is also called as ________...

Concurrent engineering is a business strategy which replaces the traditional product development process with one in which tasks are done in parallel and there is an early consideration for every aspect of a product's development process. This strategy focuses on the optimization and distribution of a firm's resources in the design and development process to ensure an effective and efficient product development process.

18). What does PDCA mean?

Plan: Plan every activity in a project with proper analysis and considerations for risks and other constraints

Do: Work on the activity as per the plan and complete it

Check: Check the work you have done with Quality Standards and correct the anomalies

Act: Do the root cause analysis and correct the process to prevent the occurrence of errors in future.

19). PDCA Cycle is also called as ____________

A) Deming Wheel

B) Continuous Improvement Cycle

C) Deming cycle

D) All of the above

Ans:

D) All of the above. (The PDCA cycle is known as the Deming Wheel, the Deming Cycle, and the Continuous Improvement Cycle.)

20). Prevention Cost is a ______ cost

A) Conformance Cost

B) Internal Failure Cost

C) External Failure Cost

D) none of the above

Ans:

A) Conformance Cost

Prevention cost is not a failure cost; it is "the cost of activities specifically designed to prevent poor quality in products or services", and so it is a conformance cost.

21). Cost of Rework is an Internal failure cost

A) True

B) False

Ans:

True

22). Benchmarking is the continuous process of measuring products, services and processes against the toughest competitors or those companies recognized as industry leaders

A) True

B) False

Ans:

True

23). Benchmarking improves Customer satisfaction

A) True

B) False

Ans:

True

24). Deming Wheel is a constantly rotating ________ Cycle

The approach to continuous improvement is best illustrated using the PDCA cycle, which was developed in the 1930s by Dr. Shewhart of the Bell System. The cycle comprises the four steps of Plan, Do, Check, and Act. It is also called the Deming Wheel, and is one of the key concepts of quality.

25). FMEA is _____________

A) First Materials Engineering Analysis

B) Failure Mode and effect analysis

C) Failure Materials Engineering analysis

D) First Mode and Effect Analysis

Ans:

Failure Mode and effect analysis

26). DFMEA is __________

The answer is Design FMEA:

Design Failure Mode and Effect Analysis

27). _______________ Developed Control Charts.

A) Deming

B) Shewhart

C) Pareto

D) Joseph Juran

Ans:

B) Shewhart

28). Cause and Effect diagram is also called as ________________

A) Fish Bone Diagram

B) Ishikawa Diagram

C) Both a & b

D) None of the above

Ans:

C) Both a & b (Fish Bone Diagram and Ishikawa Diagram)

29) What is a RPN in a FMEA?

A) Rejections Process Number

B) Risk Process Number

C) Risk Priority Number

D) None of the above

Ans:

Risk Priority Number

Bug Tracking

1). What is the difference between Usecase, Test Case, Test Plan?

Use case: A use case is prepared by the BA in the Functional Requirement Specification (FRS); use cases are nothing but the steps (scenarios) given by the customer.

Test Case: It is prepared by the Test Engineer based on the use cases from FRS to check the functionality of an application thoroughly.

Test Plan: The Team Lead prepares the test plan; in it he represents the scope of the test, what to test and what not to test, scheduling, what to test using automation, etc.

2). How can we design test cases from Requirements?

Do the requirements represent the exact functionality of the AUT? Yes, requirements should represent the exact functionality of the AUT.

First of all you have to analyse the requirements very thoroughly in terms of functionality. Then you have to think about suitable test case design techniques (black box design techniques like Equivalence Class Partitioning, Boundary Value Analysis, Error Guessing and Cause-Effect Graphing) for writing the test cases.

Using these techniques you should design test cases that are capable of uncovering defects.

4) How do you launch test cases in TestDirector and where are they saved?

You create the test cases in the Test Plan tab and link them to the requirements in the Requirements tab. Once the test cases are ready, you change their status to Ready, go to the "Test Lab" tab, create a test set, add the test cases to the test set, and run them from there.

For automation: in the Test Plan tab, create a new automated test, launch the tool, create the script, save it, and run it from the Test Lab the same way as manual test cases.

To answer the question: the test cases are stored in the Test Plan tab, or more precisely in TestDirector's database. (TD is now referred to as Quality Center.)

5). What is the difference between a Bug and a Defect?

When the tester verifies the test cases, all failed test cases are recorded as bugs, directed for necessary action, and recorded in defect reports. From a testing point of view, all failed test cases are defects as well as bugs. From a development point of view, if the product does not meet the software requirement specification or lacks any other required feature, it is a defect in the system; whoever finds that a feature does not meet his or her requirement calls it a bug in the product.

6) How can we explain a bug which may arise at the time of testing? Explain in detail.

First check the status of the bug, then check whether the bug is valid or not. Then forward the bug to the team leader, and after confirmation forward it to the concerned developer.

7) What do you mean by "reproducing a bug"? If the bug is not reproducible, what is the next step?

Reproducing a bug means making the defect occur again by repeating the same steps. If I find a defect — for example, I click a button and the corresponding action does not happen — I will call it a bug. If the developer is unable to see this behavior, he will ask us to reproduce the bug.

Another scenario: if the client reports a defect in production, we will have to reproduce it in the TEST environment.

8) How can we know whether a bug is reproducible or not?

A bug is reproducible if we can make it occur again by following the same steps. If we cannot reproduce it, it is not reproducible, in which case we will do some further testing around it; if we still cannot see it, we will close it, and just hope it never comes back again.

9) On what basis do we give priority and severity for a bug? Give one example each of high priority with low severity, and high severity with low priority.

The priority is always given by the team lead; the tester does not assign the priority. Some examples:

High Severity: H/W Bugs Application crash.

Low Severity: User Interface Bugs.

High Priority: Error message is not coming on time, calculation Bugs etc..

Low Priority: Wrong Alignment, Final Output Wrong.

10). How is traceability of a bug followed?

The traceability of a bug can be followed in several ways:

• Mapping the Functional requirements scenarios (FS Doc) - test cases (ID) -- Failed test cases (Bugs)

• Mapping between requirements (RS Doc) - test case (ID) - Failed Test Case

• Mapping between Test plan (TP Doc)- test case (ID) - Failed Test case

• Mapping between business requirements (BR Doc) - test Case (ID) - Failed Test case

• Mapping between High Level Design (Design doc) - Test Case (ID) - Failed test case.

Usually the traceability matrix is a mapping between the requirements, client requirements, functional specification, test plan and test cases.

12) What will be the role of the tester if a bug is reproduced?

Whenever the bug is reproduced, the tester can send it back to the developer and ask him to fix it again. If the developer cannot fix the bug once again, and the tester sends the bug back to the developer a third time, the tester can mark the bug as Deferred, i.e. he can reject the build (.exe).

13) Who will change the status of a bug, say to "Deferred"?

As soon as the tester finds a bug he logs it with status NEW and assigns it to a developer. The developer working on that bug changes the status to OPEN. When the developer feels it does not require fixing at that moment, he changes the status to DEFERRED. If he completes working on it, he changes the status to CLOSE and assigns the report to the tester. The tester retests the bug and confirms the status as CLOSE. We come across many more statuses such as DUPLICATE, NOT REPRODUCIBLE, TO BE FIXED, CRITICAL, BLOCKED and NEED CLARIFICATION. We use the status according to the bug.

14). Is it possible to have a defect with high severity and low priority and vice-versa i.e. high priority and low severity?

Yes. Suppose the development team delivers two modules for testing. A bug raised in the first module has minor severity, and at the same time a bug raised in the second module has major severity. We then learn that the client is coming for inspection the next day and wants the first module ready. At this time the low-severity bug gets high priority and the high-severity bug gets low priority.

15). What is Defect Life Cycle in Manual Testing?

Defect Life Cycle:

Defect Detection

Reproduce the Defect

Assign the bug to the Developer

Bug Fixing

Bug Resolution

Regression (Bug Resolution Review)

16). What is the difference between Bug Resolution Meeting and Bug Review Committee? Who are the participants in Bug Resolution Meeting and Bug Review Committee?

Bug resolution meeting: It is conducted in the presence of the test lead and developers, where the developers give their comments on whether to correct the bug now or later, and the test leader accepts or rejects those comments. They also decide what method should be chosen to correct the error, so that no new bug is reported in the regression test and other parts of the application are not affected.

Bug review committee: It is often conducted by the test lead and project managers in the presence of the client, where they decide what errors should be considered as bugs and what severity level each should be treated as.

17). How to give BUG Title & BUG Description for ODD Division?

Assumption: ODD number division fails

Bug Title: Odd number division fails

Bug Description:

1. Open calc window

2. Enter an odd number.

3. Click divide by (/) button.

4. Enter another odd number

5. Hit Enter or click "=" button in calc window

6. Observe the result

Expected Result:

Quotient displayed is correct

Actual Result:

Quotient displayed is incorrect.

18). What is build interval period?

In some companies builds are delivered from the development team to the testing team to start system testing. For example, a new product XXX is released to the testing team, so the development team delivers a build to the testing team. The testing team does the testing and then releases the product to the client. Now a new version of the product named XXX.1 comes up and is released to the testing team; this is the second build, and the time between these two builds is called the build interval.

19). What is the Difference between End-to-End Testing & System Testing.

System Testing: This is the process of testing end-to-end business functionalities of the application (system) based on client business requirements.

End-to-End Testing: This is the macro end of the testing. This testing mimics the real life situations by interacting with the real time data, network and hardware etc.

In system testing we take sample test data only, whereas in end-to-end testing we take real-time data (as a sample) and interact with the network and hardware.

22). How would you say that a bug is 100% fixed?

In the quality world you can't say a bug is 100% fixed or 50% fixed. Once a bug is fixed, we do regression testing on the new version to make sure the bug fix doesn't have any impact on the old functionality.

23). What is the difference between SQA Robot and Rational Robot or does both are same?

SQA Robot and Rational Robot are almost the same; the only difference is that Rational Robot is the advanced version of SQA Robot.

25). What is the difference between Rational clear quest and TestDirector?

Both are test management tools. Rational ClearQuest is from IBM; TestDirector is from Mercury. Feature-wise they differ, and since rarely will any company use both tools, most of the differences go unremarked. One difference is cost: two products from different companies will not have the same cost.

26). How do you post a bug?

Nowadays bugs are posted using tools known as bug tracking tools. Custom-designed tools, built for a company's specific bug format, accept the details of the issue from the testers as follows:

1. Bug ID (Tool generates the ID)

2. Bug Description

3. Steps to reproduce the Bug

4. Hardware & Software environment

5. Status (NEW, RE-OPEN….)

6. Version ID of the Build

7. Assigned to

8. Severity

9. Priority

10. Tester Name & Date of execution

Test Engineers fill the above fields in the tool and the tool acts as a central repository and tracks the entire bug life cycle.

28). What are the different types of bugs we normally see in a project? Include the severity as well.

|Sl No. |Type of Defect                                           |Severity |
|1      |User Interface Defects                                   |Low      |
|2      |Boundary Related Defects                                 |Medium   |
|3      |Error Handling Defects                                   |Medium   |
|4      |Calculation Defects                                      |High     |
|5      |Improper Service Levels (Control flow defects)           |High     |
|6      |Interpreting Data Defects                                |High     |
|7      |Race Conditions (Compatibility and Intersystem defects)  |High     |
|8      |Load Conditions (Memory Leakages under load)             |High     |
|9      |Hardware Failures                                        |High     |
|10     |Source bugs (Help Documents)                             |Low      |

29) Some Tips for Bug Tracking

1. A good tester will always try to reduce the steps to the minimal to reproduce the bug; this is extremely helpful for the programmer who has to find the bug

2. Remember that the only person who can close a bug is the person who opened it in the first place. Anyone can resolve it, but only the person who saw the bug can really be sure that what he or she saw is fixed.

3. There are many ways to resolve a bug. FogBUGZ (for example) allows you to resolve a bug as fixed, won't fix, postponed, not reproducible, duplicate, or by design.

4. Not Repro means that nobody could ever reproduce the bug. Programmers often use this when the bug report is missing the repro steps.

5. You'll want to keep careful track of versions. Every build of the software that you give to testers should have a build ID number so that the poor tester doesn't have to retest the bug on a version of the software where it wasn't even supposed to be fixed.

6. If you're a programmer, and you're having trouble getting testers to use the bug database, just don't accept bug reports by any other method. If your testers are used to sending you email with bug reports, just bounce the emails back to them with a brief message: "please put this in the bug database. I can't keep track of emails."

7. If you're a tester, and you're having trouble getting programmers to use the bug database, just don't tell them about bugs - put them in the database and let the database email them.

8. If you're a programmer, and only some of your colleagues use the bug database, just start assigning them bugs in the database. Eventually they'll get the hint.

9. If you're a manager, and nobody seems to be using the bug database that you installed at great expense, start assigning new features to people using bugs. A bug database is also a great "unimplemented feature" database, too.

10. Avoid the temptation to add new fields to the bug database. Every month or so, somebody will come up with a great idea for a new field to put in the database. You get all kinds of clever ideas, for example, keeping track of the file where the bug was found; keeping track of what % of the time the bug is reproducible; keeping track of how many times the bug occurred; keeping track of which exact versions of which DLLs were installed on the machine where the bug happened. It's very important not to give in to these ideas. If you do, your new bug entry screen will end up with a thousand fields that you need to supply, and nobody will want to input bug reports any more. For the bug database to work, everybody needs to use it, and if entering bugs "formally" is too much work, people will go around the bug database.

Database Testing

1). What is Database testing?

Testing the back-end databases, e.g. comparing the actual results with the expected results.

2). What is database testing and what do we test in database testing?

Database testing basically includes the following:

1) Data validity testing.

2) Data Integrity testing

3) Performance related to database.

4) Testing of Procedure, triggers and functions.

For data validity testing you should be good at SQL queries.

For data integrity testing you should know about referential integrity and the different constraints.

For performance-related things you should have an idea about the table structure and design.

For testing procedures, triggers and functions you should be able to understand them.

3). What do we normally check for in database testing?

Database testing involves some in-depth knowledge of the given application and requires a more defined plan of approach to test the data. Key issues include:

1) data Integrity 

2) data validity 

3) data manipulation and updates. 

 

The tester must be aware of database design concepts and implementation rules.

4). How do you test a database manually? Explain with an example.

Observe whether operations performed on the front end take effect on the back end or not.

The approach is as follows:

While adding a record through the front end, check on the back end whether the addition of the record has taken effect. Do the same for delete, update, etc.

Example: enter an employee record into the database through the front end and check manually whether the record has been added to the back end.
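A minimal sketch of automating that back-end check with Python's built-in sqlite3 (the schema and data are illustrative stand-ins for the application's real database):

    import sqlite3

    # In-memory stand-in for the application's back end.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT)")

    # Simulate the front-end 'add record' operation.
    db.execute("INSERT INTO employee (name) VALUES (?)", ("Ravi",))
    db.commit()

    # Back-end verification: the record must now exist.
    count = db.execute("SELECT COUNT(*) FROM employee WHERE name = ?",
                       ("Ravi",)).fetchone()[0]
    assert count == 1, "front-end insert was not reflected in the back end"
    print("record verified on the back end")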

5). How do you test a SQL query in WinRunner, without using database checkpoints?

By writing a scripting procedure in TSL we can connect to the database and test the database and queries.

6). How do you test whether a database is updated when information is entered at the front end?

In WinRunner, with a database checkpoint. Manually, we go to the front end and enter some information, note identifying values (such as session names), search for those values in the back end, and verify that the query results are correct.

7). What are the different stages involved in database testing?

In DB testing we need to check for,

1. The field size validation

2. Check constraints.

3. Indexes are done or not (for performance related issues)

4. Stored procedures.

5. The field size defined in the application matches the field size in the DB.

8). What SQL statements have you used in Database Testing?

DDL (Data Definition Language) statements, for example: CREATE, ALTER, DROP, TRUNCATE, COMMENT, RENAME.

DML (Data Manipulation Language) statements, for example: SELECT, INSERT, UPDATE, DELETE, MERGE/UPSERT, CALL, EXPLAIN PLAN, LOCK TABLE.

DCL (Data Control Language) statements, for example: GRANT, REVOKE, COMMIT, SAVEPOINT, ROLLBACK, SET TRANSACTION.

These are the database testing commands.

9). How do you use SQL queries in WinRunner/QTP?

In QTP, using the output database checkpoint and the database checkpoint, select the SQL manual queries option and enter SELECT queries to retrieve data from the database, then compare the expected and actual results.

10). What steps does a tester take in testing stored procedures?

In my view, the tester has to go through the requirement to understand why the particular stored procedure was written. Then he has to check whether all the required indexes, joins, updates and deletions are correct, comparing with the tables mentioned in the stored procedure. He also has to ensure that the stored procedure follows the standard format (comments, updated by, etc.).

11). How do you check whether a trigger fired or not while doing database testing?

It can be verified by querying the common audit log, where we are able to see which triggers fired.

12). Is an "A fast database retrieval rate" a testable requirement?

The requirement as stated is ambiguous, so it is not testable as written. The SRS should clearly mention the performance or transaction requirements, i.e. it should say something like "a DB retrieval rate of 5 microseconds".

13). How do you test a DTS package created for data insert, update and delete? What should be considered while testing it? What conditions are to be checked if the data is inserted, updated or deleted using text files?

Data integrity checks should be performed. If the database schema is in 3rd normal form, that should be maintained. Check to see if any of the constraints have thrown an error. The most important command will be the DELETE command; that is where things can go really wrong.

Most of all, maintain a backup of the previous database.

14). How do you test data loading in database testing?

Using the Query Analyzer. You have to do the following things when you are involved in data load testing:

1. You have to know about the source data (table(s), columns, data types and constraints)

2. You have to know about the target data (table(s), columns, data types and constraints)

3. You have to check the compatibility of source and target.

4. You have to open the corresponding DTS package in SQL Enterprise Manager and run the DTS package (if you are using SQL Server).

5. Then you should compare the columns' data of source and target.

6. You have to check the number of rows of source and target.

7. Then you have to update the data in the source and see whether the change is reflected in the target or not.

8. You have to check for junk characters and NULLs. (A minimal scripted sketch of steps 5 and 6 follows this list.)
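Below is that sketch, using Python's built-in sqlite3 (the table names and data are illustrative stand-ins for a real source and target):

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE source_emp (id INTEGER, name TEXT);
        CREATE TABLE target_emp (id INTEGER, name TEXT);
        INSERT INTO source_emp VALUES (1, 'A'), (2, 'B');
        INSERT INTO target_emp VALUES (1, 'A'), (2, 'B');
    """)

    # Step 6: row counts of source and target must match.
    src = db.execute("SELECT COUNT(*) FROM source_emp").fetchone()[0]
    tgt = db.execute("SELECT COUNT(*) FROM target_emp").fetchone()[0]
    assert src == tgt, f"row count mismatch: {src} vs {tgt}"

    # Step 5: every source row must exist unchanged in the target.
    diff = db.execute("""
        SELECT s.id FROM source_emp s
        LEFT JOIN target_emp t ON s.id = t.id AND s.name = t.name
        WHERE t.id IS NULL
    """).fetchall()
    assert not diff, f"rows missing or different in target: {diff}"
    print("source and target match")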

15). What is the way of writing test cases for database testing?

You have to do the following for writing database test cases:

1. First of all you have to understand the functional requirements of the application thoroughly.

2. Then you have to find out the back-end tables used, the joins used between the tables, cursors used (if any), triggers used (if any), stored procedures used (if any), and the input and output parameters used for developing that requirement.

3. After knowing all these things you have to write test cases with different input values to check all the paths of the stored procedure.

One thing: writing test cases for back-end testing is not like functional testing; you have to use white-box testing techniques.

 

Web Testing

1). What happens in a web application when you enter all the data and click on the submit button, and suddenly the connection goes off? Where does the data go?

It depends on when the connection goes off: after some time (say one or two minutes) or immediately after pressing the submit button.

If the connection goes off immediately, no data will be submitted.

2). What are the important scenarios for testing emails? How do you test emails? Which tool is best for testing email?

Testing email is not a very easy scenario. We can categorize the different parts on which a tester may perform testing:

1) Incoming mail with attachment

2) Outgoing mail with attachment

3) Mail failure

4) Other properties like Delete, Edit, etc.

1) For incoming mail with attachment:

1. Check the proper incoming address or ID.

2. Check not only the TO address, but also the CC and BCC addresses.

3. Check the maximum and minimum limits for the number of addresses.

4. Check if the address has an error, such as a missing @ or dot.

5. Check if the address has more than one @ sign.

6. Check if the mail arrives in less or more than the standard (registered) time.

7. Check if the mail arrives more than once.

8. Check that the address only has @, _ and - as special symbols (the standard signs).

9. Check that the mail does not have unnecessary content in itself.

10. If there are any attachments, check that they open properly.

11. Check that the attachment size does not exceed the standard size.

12. If there is more than one attachment, the combined size must be under the standard size.

13. If the email content has images or Flash, they must display properly.

14. If the mail has files with non-standard extensions, check that they are shown properly.

15. If the user reads the mail, it should be marked as read.

2) For outgoing mail with attachment:

1. Check the address as done before.

2. Here we must check the attachments.

3. Check all the sent pictures.

4. The mail content should be proper.

3) Mail failure: check for mail failure.

4) Other properties: Delete, Edit, etc.

3). How do you do browser testing? Create a standard script and run it for the different browser combinations.

The GUI architecture and event messaging differ from browser to browser; for example, IE uses Win32::OLE messaging while Firefox may use GTK-based messaging. So it is generally difficult to create one standard script that runs on all browsers.

Tools like WinRunner and QTP use complex procedures internally to handle different browsers.

If the application supports different browsers like IE, Firefox, Opera and Netscape, we will try to do manual testing.

4). How do you test cookies and memory leakages? (i.e. whether the cookies expire or not, and how to find memory leakage)

Cookies testing: check that cookies are created when expected and that they expire when they should.

Memory leakages: by volume testing and stability/reliability testing.

5). What are the important test scenarios for testing a web site?

As a tester you should test the GUI of the website; test whether the page layout and design elements are consistent throughout the site; whether all the links provided in the website are working properly; what the expected loads on the server are; and the performance of the website (check web server response time and database query time) under heavy load.

6). What type of testing is carried out to find memory leakages?

Volume testing, i.e. testing the application with large volumes of data or with production data.

While testing a banking application, we tested it with accounts created by us, a sample of 300 accounts. The search option with customer number was working fine with a low response time. But when we tested the same search option with a large volume of data (2 lakh, i.e. 200,000, accounts), the search response time was 20 seconds, which is not at all acceptable.

We have to test the application with large volumes of data to find the performance-related issues.

7). From the testability point of view, what is the difference between client/server testing and web testing?

Client/server testing deals with a tiered client/server architecture, and when testing it we need to consider all types of testing like stress testing, data volume testing, load testing and performance testing.

When you are doing normal web testing, you will be testing navigation, frames, broken links or missing URLs, and static text.

8). In n-tier architecture, what factors should be considered for testing?

Basically, 3-tier architecture is used for Windows-based applications, whereas n-tier architecture is used for web-based applications. So we should do the testing related to web testing.

9). During password field testing, what should be the focus (give the answer in one word)?

During password field testing there are some conditions we have to check. Assume there are username and password fields: enter the username and press the Tab key or click on the password field; the cursor should blink in the password field. Whatever input is given should be encrypted, and the user should not be allowed to copy it. If Caps Lock is on, the user should be told that Caps Lock is on. It also depends on the requirements. In one word, the focus is encryption.

10). What is the difference in approach for testing client/server and web applications? (What is the main difference?)

Testing web applications and client/server applications is largely the same, but in web applications some extra features have to be tested (like links, broken links, URLs, and static text coverage).

11). What is your approach, or how do you start testing a web application?

The first thing you need to do is go through the specification; without the specification you are just playing with the application. The specification is the main interface for any software to be tested.

12). What are the typical problems in web testing?

1. Server problems (i.e. server down or under maintenance)

2. Hardware problems

3. Database problems

13). How do you test browser compatibility?

Testing the application with multiple browsers (i.e. IE, Netscape Navigator, Mozilla Firefox, etc.) is called browser compatibility testing.

14). What is the difference between testing in client-server applications and web based applications?

Client/server applications: The application is loaded on the server, and on every client machine an exe is installed to call this application.

Web-based applications: The application is loaded on the server, but no exe is installed on the client machine; we call the application through a browser.

Client Server Testing:

In client/server testing the test engineer conducts the following types of testing:

1. Behaviour testing (GUI testing)

2. Input domain testing

3. Error handling testing

4. Backend testing

In web testing the test engineer conducts the following types of testing:

1. Behaviour testing

2. Static web testing

3. Input domain testing

15). What bugs mainly come up in web testing, and what severity and priority do we give them?

The bugs that mainly come up in web testing are cosmetic bugs on web pages, field-validation-related bugs, and bugs related to scalability, throughput and response time of web pages.

In field validations especially, if you enter HTML tags in the fields the application may crash when they are processed, which gives you a HIGH priority and HIGH severity defect.

Test Case Writing

1). How do we write test cases without documents or knowing the requirements?

We can adopt a testing technique called Exploratory Testing. According to James Bach, exploratory testing is defined as "an interactive process of concurrent product exploration, test design, and test execution."

2). What are the test cases for one Rupees Coin Box (Telephone box)?

Positive test cases:

TC1: Pick up the Handset

Expected: Should display the message " Insert one rupee coin"  

TC2: Insert the coin 

Expected: Should display the message " Dial the Number"  

TC3: When you get a busy tone, hang-up the receiver

Expected: The inserted one rupee coin comes out of the exit door.

TC4:  Finish off the conversation and hang-up the receiver

Expected: The inserted coin should not come out.

TC5: During the conversation, in the case of a local call (assume the duration is 60 sec), when 45 seconds are completed

Expected: It should prompt you to insert another coin to continue by giving beeps.

TC6: In the above scenario, if another coin is inserted

Expected: 60 sec will be added to the counter.

TC7: In the TC5 scenario, if you don't insert one more coin.

Expected: The call gets ended.

TC8: Pick up the receiver. Insert the one rupee coin; dial the number after hearing the dial tone. Assume it got connected and you are hearing the ring tone. You immediately end the call.

Expected: The inserted one rupee coin comes out of the exit door.

3). How will you review test cases?

By following a Test Case Review Checklist, as detailed below

Test Case Preparation Checklist: This is used to ensure test cases have been prepared as per specifications. For any negative response to the checklist, the test manager will assess the impact and document it as an issue to the concerned parties for resolution. This can be tracked using weekly status reports or emails. Each item is answered Yes or No, with comments:

• Is the approved test plan available?

• Are the resources identified to implement the test plan?

• Are the baseline documents available?

• Is domain knowledge being imparted to the team members who are working on the application?

• Have test cases been developed considering all requirements?

• Have all the positive as well as negative test cases been identified?

• Have all boundary test cases been covered?

• Have test cases been written for GUI/hyperlink testing for web applications?

• Have test cases been written to check data integrity?

4) Explain about Use Cases?

In software engineering, a Use Case is,

1. A technique for capturing the potential requirements of a new system or software change.

2. Each use case provides one or more scenarios that convey how the system should interact with the end user or another system to achieve a specific business goal.

3. Use cases typically avoid technical jargon, preferring instead the language of the end user or domain expert.

4. Use cases are often co-authored by software developers and end users.

 

6) Write test cases for a cell phone

Test cases for a mobile phone:

1) Check whether the battery is inserted into the mobile properly

2) Check switching the mobile on and off

3) Insert the SIM into the phone and check that it is detected

4) Add one user with a name and phone number in the address book

5) Check an incoming call

6) Check an outgoing call

7) Send/receive messages on that mobile

8) Check that all the number/character keys on the phone work fine by pressing them

9) Remove the user from the phone book and check that the name and phone number are removed properly

10) Check whether the network is working fine

7) Test cases for coffee machine?

1. Plug in the power cable and press the on button. The indicator bulb should glow, indicating the machine is on.

2. Whether there are three different buttons: Red, Blue and Green.

3. Whether Red indicates Coffee.

4. Whether Blue indicates Tea.

5. Whether Green indicates Milk.

6. Whether each button produces the correct output (Coffee, Tea or Milk).

7. Whether the desired output is hot or not (Coffee, Tea or Milk).

8. Whether the quantity dispensed exceeds the capacity of a cup.

9. Whether the power is off (including the power indicator) when the off button is pressed.

10. Verify the output when there is no coffee mix, milk or tea mix in the machine.

8) What are Test cases for date field validation? (Third party Calendar controls/date pickers will have a text box attached with a button/icon beside it)

Ans: You can consider the following test cases for a calendar control.

1. Ensure that the calendar window is displayed and active when the calendar is invoked by pressing the calendar icon. (Once we faced an issue where the calendar window was in a minimized state when we invoked the calendar.)

 

2. Ensure that the calendar date is defaulted to the system date.

 

3. Ensure that when a date is selected in the calendar (double click, or some other method), the selected date is displayed in the text box.

There may be many other cases, depending on whether the text box is editable, the purpose of the date field, etc.

9) What can be the possible test cases for the Computer Mouse?

1. To check the mouse manufacturer

2. Whether it is a PS/2, USB, serial port or cordless mouse

3. It should work when plugged into the corresponding ports on machines from different manufacturers

4. It should be platform independent.

5. Right-clicking the mouse should open the context menu.

6. Double-clicking on any folder should open it; double-clicking a file should open the file.

7. Should be able to scroll up and down using the scroll wheel on the mouse.

8. Should be able to swap the functionality of the right and left mouse buttons by changing the settings.

9. Should be able to point to the scrollbar and then drag up and down.

10. Should always point to the right place, where it is intended to point.

10) What are GUI test cases?

GUI test cases are designed for Usability Testing: verifying the user-friendliness of the given application with respect to look and feel, spelling mistakes, alignment, the availability of all objects, and their accessibility through input devices.

11) What is the difference between positive and negative test cases?

Positive Testing = (not showing an error when not supposed to) + (showing an error when supposed to). If either of the situations in parentheses happens, you have a positive test in terms of its result: the application did what it was supposed to do. Here the tester enters only valid values, according to the requirements.

Negative Testing = (showing an error when not supposed to) + (not showing an error when supposed to). These situations usually crop up during boundary testing or cause-effect testing. If either of the situations in parentheses happens, you have a negative test in terms of its result: the application did what it was not supposed to do. The tester enters invalid values, which may crash the application.

For example, in a Registration Form the Name field should accept only alphabets. For positive testing, the tester enters only alphabets; the application should run properly and accept them. For negative testing, the tester enters numbers and special characters in the same field; if the application rejects them, the negative test is successful.
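To make this concrete, here is a minimal sketch in Python, assuming a hypothetical is_valid_name validator for the Name field (the function name and its rules are illustrative, not from any particular application):

    # Hypothetical validator for the Name field: accepts alphabets only.
    def is_valid_name(name: str) -> bool:
        return name.isalpha()

    # Positive test cases: valid input must be accepted.
    assert is_valid_name("John")
    assert is_valid_name("Priya")

    # Negative test cases: numbers and special characters must be rejected.
    assert not is_valid_name("John123")
    assert not is_valid_name("J@hn")
    assert not is_valid_name("")  # empty input is rejected too

The positive assertions use only valid input classes; the negative assertions deliberately feed input the field must reject.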

14) Write test cases for the Yahoo Mail page after login

Test case 1: To verify that clicking the Mail button lists all the options, such as Check Mail and Compose

Description: Click on the Mail button

Expected result: Clicking the Mail button lists all the options, such as Check Mail and Compose

Test case 2: To verify that clicking the Check Mail option in the mail list takes you to the Inbox page

Description: Click on the Check Mail option

Expected result: The Check Mail option opens the Inbox page

Test case 3: To verify that clicking the Inbox displays your received mails

Description: Click the Inbox button

Expected result: It lists all the mails you received in the Inbox

Test case 4: To verify that clicking the Compose option under the Mail button takes you to the Compose page, where you can compose and send mails

Description: Click on the Compose option under the Mail button

Expected result: It takes you to the Compose page

Test case 5: To verify that after writing a message and clicking 'Send', the mail is sent to the address you specified

Description: Enter the mail id you want to send the message to in the 'To' field, write the message in the compose box, and click on the Send button

Expected result: The mail is sent to the mail id given in the 'To' field

Test case 6: To verify that giving a wrong id results in a failure notice

Description: Give a wrong mail id in the 'To' field on the Compose page and see what happens

Expected result: A failure notice will arrive in your Inbox

In this way, you can write any number of test cases for the Yahoo Mail page.

15) What is defect leakage?

Defect Leakage is also referred to as Defect Seepage: the number of defects related to one particular phase that are not captured in that same phase.

For example: requirements-related defects should be captured in the requirements review, not in unit testing or system testing.

16) Which of the following statements about generating test cases is false?

(a) Test cases may contain multiple valid test conditions

(b) Test cases may contain multiple invalid test conditions

(c) Test cases may contain both valid and invalid test conditions

(d) Test cases may contain more than one step

(e) Test cases should contain expected results

Ans: None of the statements is false.

21). What is VSS? Explain?

VSS: VSS stands for Visual Source Safe. It is a configuration management tool. It is a virtual library of computer files.

Users can read any of the files in the library at any time, but in order to change them, they must first check the file out. They are then allowed to modify the file and finally check it back in. Only after they check the file in are their changes made available to other users.

Configuration Management:

Definition #1: The process of identifying and defining the configuration items in a software system, controlling the release, versioning and change of these items throughout the software system life cycle, recording and reporting the status of configuration items and change requests, and verifying the completeness and correctness of configuration items.

Definition #2: The tracking and control of software development. SCM systems typically offer version control and team programming features. SCM is an acronym for software configuration management, and relates to configuration management (CM).

Configuration Management Tool: A software product providing automatic support for Change, Configuration or versions control.

22) What is SDLC and briefly discuss the stages in SDLC?

SDLC, or the software development life cycle, is the whole process of developing software, from requirements gathering through maintenance. Broadly, the stages of the SDLC are:

Gathering information, Analysis, Design, Coding, Testing, Implementation and Maintenance.

Each stage has a well-defined procedure so that the developed software meets the customer's or client's requirements in the best and most cost-effective manner, without compromising the quality of the product.

23). Write test cases for a telephone.

Test cases for a telephone:

1. To check the connectivity of the telephone line or cable

2. To check the dial tone of the phone

3. To check the keypad by dialing any valid number on the phone

4. To check the ring tone and its volume levels

5. To check the voice on both sides (from and to) of the phone

6. To check the display monitor of the phone

7. To check whether the redial option is functioning or not

8. To check whether the loudspeaker is functioning or not

If anything is missing above, you can add more test cases.

Design test cases for sending emails.

For testing the sending of an email, you can write test cases for:

1) Performance: by using connections from different ISPs, i.e., the speed.

2) If your email id is POP compliant, check whether you can send mail using email clients.

3) Whether your email can be sent with an attachment.

4) Maximum attachment limit.

5) Maximum mail size.

6) Sending to valid/invalid ids, and whether the mail is received/bounced back respectively.

What can be the various test cases for a pen?

❖ To check the pen type

❖ To check whether the pen cap is present or not

❖ To check whether the pen ink is filled or not

❖ To check whether the pen writes or not

❖ To check the ink color, i.e., black or blue

❖ To check the pen color

❖ To check whether the pen can write on all types of paper or not

❖ To check the ink capacity of the pen

❖ To check whether the pen body is made of fiber or plastic

Give test cases for the withdraw module in a banking project

Step 1: When the balance in the account is nil, try to withdraw some amount (amount > 0). It should display a message such as "Insufficient funds in acc".

Step 2: When the account has some balance, try to withdraw an amount greater than the balance in the account. It should display "Insufficient funds in acc".

Step 3: When the account has some balance, enter a valid amount (amount > 0 and amount <= balance). The amount should also be in multiples of hundreds (varies depending on the reqs docs); the withdrawal should succeed.

In the case of a minimum balance being mandatory in the account:

Step 5: When the account has some balance, try to withdraw the whole amount. It should display a message such as "Minimum balance should be maintained".

Step 6: When the account balance equals the minimum balance, try to withdraw any amount. It should display a message such as "Minimum balance should be maintained".
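A minimal sketch of these rules in Python, assuming a hypothetical withdraw function and an assumed minimum balance of 500 (both illustrative; the real values come from the requirements docs):

    MIN_BALANCE = 500  # assumed; the real value comes from the reqs docs

    def withdraw(balance: int, amount: int) -> str:
        """Return an error message, or "OK" if the withdrawal is allowed."""
        if amount <= 0 or amount % 100 != 0:
            return "Amount must be > 0 and in multiples of hundreds"
        if amount > balance:
            return "Insufficient funds in acc"
        if balance - amount < MIN_BALANCE:
            return "Minimum balance should be maintained"
        return "OK"

    assert withdraw(0, 100) == "Insufficient funds in acc"                    # step 1
    assert withdraw(600, 700) == "Insufficient funds in acc"                  # step 2
    assert withdraw(1000, 300) == "OK"                                        # step 3
    assert withdraw(1000, 1000) == "Minimum balance should be maintained"     # step 5
    assert withdraw(MIN_BALANCE, 100) == "Minimum balance should be maintained"  # step 6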

How do we write test cases for a Login window where the user name is editable and limited to 8 alphabetic characters?

1. Enter a User Name only and press the LOGIN button (User Name = COES). It should display the warning message box "Please Enter User name and Password".

2. Enter a Password only and press the LOGIN button (Password = COES). It should display the warning message box "Please Enter User name and Password".

3. Enter a User Name and Password and press the LOGIN button (User = COES and Password = XYZ, a wrong user name/password combination). It should display the warning message box "Please Enter User name and Password".

6. Enter a User Name and Password and press the LOGIN button (User = "" and Password = "", blank values). It should display the warning message box "Please Enter User name and Password".

7. Enter a User Name and Password and press the LOGIN button (User = COES and Password = COES, a correct user name and password). It should navigate to the next page.

8. Enter a User Name and Password and press the LOGIN button (User = ADMIN and Password = ADMIN, a correct user name and password). It should navigate to the Maintenance page.
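These cases table-drive naturally. A sketch using pytest, assuming a hypothetical login function that stands in for the login window's logic (the credentials and page names come from the cases above; the function itself is illustrative):

    import pytest

    VALID_USERS = {"COES": "COES", "ADMIN": "ADMIN"}  # illustrative credential store

    def login(user: str, password: str) -> str:
        """Hypothetical stand-in for the login window's logic."""
        # The user name field accepts at most 8 alphabetic characters.
        if len(user) > 8 or not (user == "" or user.isalpha()):
            return "Please Enter User name and Password"
        if user and VALID_USERS.get(user) == password:
            return "Maintenance page" if user == "ADMIN" else "Next page"
        return "Please Enter User name and Password"

    @pytest.mark.parametrize("user,password,expected", [
        ("COES", "", "Please Enter User name and Password"),     # case 1
        ("", "COES", "Please Enter User name and Password"),     # case 2
        ("COES", "XYZ", "Please Enter User name and Password"),  # case 3
        ("", "", "Please Enter User name and Password"),         # case 6
        ("COES", "COES", "Next page"),                           # case 7
        ("ADMIN", "ADMIN", "Maintenance page"),                  # case 8
    ])
    def test_login(user, password, expected):
        assert login(user, password) == expected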

How will we prepare test cases?

Test cases are prepared on the basis of requirement documents. Each company follows its own format. Test cases are of 3 types:

1. GUI test cases

a. Availability b. Alignment c. Look and feel d. Spell checking

2. Positive test cases

3. Negative test cases

What is the Traceability matrix? What is its use?

Traceability Matrix is a document that provides a cross-reference between Requirements/Use Cases and Test Cases and Bugs. This document establishes traceability between the requirements and the test cases executed during system testing. It also provides a reference from a particular bug back to the specific requirement.

Detail the contents of a Test Case format as per IEEE 829.

❑ Test case id

❑ Test case name

❑ Feature to be tested

❑ Test suite id

❑ Priority

❑ Test environment

❑ Test effort

❑ Test duration

❑ Precondition

❑ Test procedure:



|Step No. |Actions |Input Reqd. |Expected |Actual |Results |

| | | | | | |

❑ Test case pass or fail criteria 

How will you check that your test cases covered all the requirements?

By using the Traceability Matrix. A traceability matrix is a matrix showing the relationship between the requirements and the test cases.
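A minimal sketch of this idea in Python: represent the matrix as a mapping from requirement ids to test case ids and report any requirement with no test case (all ids are made up for illustration):

    # Illustrative traceability matrix: requirement ids mapped to test case ids.
    traceability = {
        "REQ-001": ["TC-01", "TC-02"],
        "REQ-002": ["TC-03"],
        "REQ-003": [],  # no test case yet: a coverage gap
    }

    uncovered = [req for req, tcs in traceability.items() if not tcs]
    print("Requirements without test cases:", uncovered)  # ['REQ-003']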

For a triangle (the sum of any two sides must be greater than the third side), what is the minimal number of test cases required?

The answer is 3:

1. Measure all sides of the triangle.

2. Add the minimum and the second-largest lengths of the triangle and store the result as Res.

3. Compare Res with the largest side of the triangle.
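The same three steps, condensed into a small Python sketch using the strict triangle inequality (the function name is illustrative):

    def is_valid_triangle(a: float, b: float, c: float) -> bool:
        x, y, z = sorted((a, b, c))  # step 1: take the measured sides in order
        res = x + y                  # step 2: minimum plus second-largest side
        return res > z               # step 3: compare Res with the largest side

    assert is_valid_triangle(3, 4, 5)       # valid triangle
    assert not is_valid_triangle(1, 2, 3)   # degenerate: 1 + 2 == 3
    assert not is_valid_triangle(1, 2, 10)  # invalid: 1 + 2 < 10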

Software Testing Dictionary

(The following definitions are taken from accepted and identified sources)

Acceptance Test. Formal tests (often performed by a customer) to determine whether or not a system has satisfied predetermined acceptance criteria. These tests are often used to enable the customer (either internal or external) to determine whether or not to accept a system.

Accessibility testing. Testing that determines if software will be usable by people with disabilities.

Ad Hoc Testing. Testing carried out using no recognized test case design technique. [BCS]

Algorithm verification testing. A software development and test phase focused on the validation and tuning of key algorithms using an iterative experimentation process.[Scott Loveland, 2005]

Alpha Testing. Testing of a software product or system conducted at the developer's site by the customer.

Artistic testing. Also known as Exploratory testing.

Assertion Testing. (NBS) A dynamic analysis technique which inserts assertions about the relationship between program variables into the program code. The truth of the assertions is determined as the program executes.
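A minimal Python sketch of the idea: assertions about the relationships between program variables are inserted into the code, and their truth is checked as the program executes (the function is illustrative):

    def average(values):
        # Assertions state relationships between variables that must hold
        # while the program executes.
        assert len(values) > 0, "values must be non-empty"
        result = sum(values) / len(values)
        assert min(values) <= result <= max(values), "average out of range"
        return result

    average([2, 4, 9])  # both assertions pass silently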

Automated Testing. Software testing which is assisted with software technology that does not require operator (tester) input, analysis, or evaluation.

Audit.

(1) An independent examination of a work product or set of work products to assess compliance with specifications, standards, contractual agreements, or other criteria. (IEEE)

(2) To conduct an independent review and examination of system records and activities in order to test the adequacy and effectiveness of data security and data integrity procedures, to ensure compliance with established policy and operational procedures, and to recommend any necessary changes. (ANSI)

ABEND (Abnormal END). A mainframe term for a program crash. It is always associated with a failure code, known as an ABEND code. [Scott Loveland, 2005]

Background testing. The execution of normal functional testing while the SUT is exercised by a realistic workload. This workload is processed "in the background" as far as the functional testing is concerned. [Load Testing Terminology by Scott Stirling]

Bandwidth testing. Testing a site with a variety of link speeds, both fast (internally connected LAN) and slow (externally, through a proxy or firewall, and over a modem); sometimes called slow link testing if the organization typically tests with a faster link internally (in that case, they are doing a specific pass for the slower line speed only).[Lydia Ash, 2003]

Basis path testing. Identifying tests based on flow and paths of the program or system. [William E. Lewis, 2000]

Basis test set. A set of test cases derived from the code logic, which ensure that 100% branch coverage is achieved. [BCS]

Bug: glitch, error, goof, slip, fault, blunder, boner, howler, oversight, botch, delusion, elision. [B. Beizer, 1990], defect, issue, problem

Beta Testing. Testing conducted at one or more customer sites by the end-user of a delivered software product or system.

Benchmarks. Programs that provide performance comparison for software, hardware, and systems.

Benchmarking is a specific type of performance test whose purpose is to determine performance baselines for comparison. [Load Testing Terminology by Scott Stirling]

Big-bang testing. Integration testing where no incremental testing takes place prior to all the system's components being combined to form the system.[BCS]

Black box testing. A testing method where the application under test is viewed as a black box and the internal behavior of the program is completely ignored. Testing occurs based upon the external specifications. Also known as behavioral testing, since only the external behaviors of the program are evaluated and analyzed.

Blink testing. What you do in blink testing is plunge yourself into an ocean of data-- far too much data to comprehend. And then you comprehend it. Don't know how to do that? Yes you do. But you may not realize that you know how.[James Bach's Blog]

Bottom-up Testing. An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher-level components. The process is repeated until the component at the top of the hierarchy is tested. [BCS]

Boundary Value Analysis (BVA). BVA is different from equivalence partitioning in that it focuses on "corner cases", or values that are just outside the range defined by the specification. This means that if a function expects all values in the range of negative 100 to positive 1000, test inputs would include negative 101 and positive 1001. BVA is often used as a technique for stress, load or volume testing. This type of validation is usually performed after positive functional validation has completed (successfully) using requirements specifications and user documentation.
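A tiny sketch of the example in this definition, assuming a hypothetical accepts check for the -100 to +1000 range:

    # Hypothetical range check for the example in the definition above.
    def accepts(value: int) -> bool:
        return -100 <= value <= 1000

    # Boundary value analysis: probe each edge and the value just beyond it.
    for value, expected in [(-101, False), (-100, True), (1000, True), (1001, False)]:
        assert accepts(value) == expected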

Branch Coverage Testing. - Verify each branch has true and false outcomes at least once. [William E. Lewis, 2000]
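A minimal sketch: two test cases are enough to give the single branch below both a true and a false outcome (the function is illustrative):

    def classify(age: int) -> str:
        if age >= 18:        # the branch under test
            return "adult"
        return "minor"

    # Two test cases give the branch both outcomes: 100% branch coverage here.
    assert classify(21) == "adult"  # true outcome
    assert classify(10) == "minor"  # false outcome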

Breadth test. - A test suite that exercises the full scope of a system from a top-down perspective, but does not test any aspect in detail [Dorothy Graham, 1999]

BRS - Business Requirement Specification

Capability Maturity Model (CMM). - A description of the stages through which software organizations evolve as they define, implement, measure, control and improve their software processes. The model is a guide for selecting the process improvement strategies by facilitating the determination of current process capabilities and identification of the issues most critical to software quality and process improvement. [SEI/CMU-93-TR-25]


Capture-replay tools. Tools that give testers the ability to move some GUI testing away from manual execution by "capturing" mouse clicks and keyboard strokes into scripts, and then "replaying" that script to re-create the same sequence of inputs and responses on subsequent tests. [Scott Loveland, 2005]

Cause Effect Graphing. (1) [NBS] Test data selection technique. The input and output domains are partitioned into classes and analysis is performed to determine which input classes cause which effect. A minimal set of inputs is chosen which will cover the entire effect set. (2)A systematic method of generating test cases representing combinations of conditions. See: testing, functional.[G. Myers]

Clean test. A test whose primary purpose is validation; that is, tests designed to demonstrate the software’s correct working.(syn. positive test)[B. Beizer 1995]

Clear-box testing. See White-box testing.

Code audit. An independent review of source code by a person, team, or tool to verify compliance with software design documentation and programming standards. Correctness and efficiency may also be evaluated. (IEEE)

Code Inspection. A manual [formal] testing [error detection] technique where the programmer reads source code, statement by statement, to a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards. Contrast with code audit, code review, code walkthrough. This technique can also be applied to other software and configuration items. [G.Myers/NBS] Syn: Fagan Inspection

Code Walkthrough. A manual testing [error detection] technique where program [source code] logic [structure] is traced manually [mentally] by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.[G.Myers/NBS]

Coexistence Testing. Coexistence isn't enough. It also depends on load order, how virtual space is mapped at the moment, hardware and software configurations, and the history of what took place hours or days before. It's probably an exponentially hard problem rather than a square-law problem. [from Quality Is Not The Goal. By Boris Beizer, Ph.D.]

Comparison testing. Comparing software strengths and weaknesses to competing products

Compatibility bug. A revision to the framework breaks a previously working feature: a new feature is inconsistent with an old feature, or a new feature breaks an unchanged application rebuilt with the new framework code. [R. V. Binder, 1999]

Compatibility Testing. The process of determining the ability of two or more systems to exchange information. In a situation where the developed software replaces an already working program, an investigation should be conducted to assess possible compatibility problems between the new software and other programs or systems.

Composability testing. Testing the ability of the interface to let users do more complex tasks by combining different sequences of simpler, easy-to-learn tasks. [Timothy Dyck, 'Easy' and other lies, eWEEK April 28, 2003]

Condition Coverage. A test coverage criteria requiring enough test cases such that each condition in a decision takes on all possible outcomes at least once, and each point of entry to a program or subroutine is invoked at least once. Contrast with branch coverage, decision coverage, multiple condition coverage, path coverage, statement coverage.[G.Myers]

Configuration. The functional and/or physical characteristics of hardware/software as set forth in technical documentation and achieved in a product. (MIL-STD-973)

Configuration control. An element of configuration management, consisting of the evaluation, coordination, approval or disapproval, and implementation of changes to configuration items after formal establishment of their configuration identification. (IEEE)

Conformance directed testing. Testing that seeks to establish conformance to requirements or specification. [R. V. Binder, 1999]

Cookbook scenario. A test scenario description that provides complete, step-by-step details about how the scenario should be performed. It leaves nothing to chance. [Scott Loveland, 2005]

Coverage analysis. Determining and assessing measures associated with the invocation of program structural elements to determine the adequacy of a test run. Coverage analysis is useful when attempting to execute each statement, branch, path, or iterative structure in a program. Tools that capture this data and provide reports summarizing relevant information have this feature. (NIST)

CRUD Testing. Build CRUD matrix and test all object creation, reads, updates, and deletion. [William E. Lewis, 2000]

Data-Driven testing. An automation approach in which the navigation and functionality of the test script is directed through external data; this approach separates test and control data from the test script. [Daniel J. Mosley, 2002]
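A minimal sketch of the approach, assuming the test data lives in a hypothetical login_data.csv with user, password, and expected columns (the file name and columns are made up):

    import csv

    def run_data_driven_tests(script, data_file="login_data.csv"):
        """Drive one test script with many external data rows (illustrative)."""
        with open(data_file, newline="") as f:
            for row in csv.DictReader(f):  # columns: user, password, expected
                actual = script(row["user"], row["password"])
                assert actual == row["expected"], f"failed for row {row}"

The script never changes; adding a row to the CSV adds a test.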

Data flow testing. Testing in which test cases are designed based on variable usage within the code.[BCS]

Database testing. Check the integrity of database field values. [William E. Lewis, 2000]

Defect. The difference between the functional specification (including user documentation) and actual program text (source code and data). Often reported as problem and stored in defect-tracking and problem-management system

Defect. Also called a fault or a bug, a defect is an incorrect part of code that is caused by an error. An error of commission causes a defect of wrong or extra code. An error of omission results in a defect of missing code. A defect may cause one or more failures.[Robert M. Poston, 1996.]

Defect. A flaw in the software with potential to cause a failure. [Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]

Defect Age. A measurement that describes the period of time from the introduction of a defect until its discovery. . [Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]

Defect Density. A metric that compares the number of defects to a measure of size (e.g., defects per KLOC). Often used as a measure of defect quality. [Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]

Defect Discovery Rate. A metric describing the number of defects discovered over a specified period of time, usually displayed in graphical form. [Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]

Defect Removal Efficiency (DRE). A measure of the number of defects discovered in an activity versus the number that could have been found. Often used as a measure of test effectiveness. [Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]
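A worked example (with assumed numbers): if a test phase discovers 90 defects and 10 more escape to be found in later phases or in production, that phase's DRE is 90 / (90 + 10) = 90%.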

Defect Seeding. The process of intentionally adding known defects to those already in a computer program for the purpose of monitoring the rate of detection and removal, and estimating the number of defects still remaining. Also called Error Seeding. [Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]

Defect Masked. An existing defect that hasn't yet caused a failure because another defect has prevented that part of the code from being executed. [Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]

Depth test. A test case that exercises some part of a system to a significant level of detail. [Dorothy Graham, 1999]

Decision Coverage. A test coverage criteria requiring enough test cases such that each decision has a true and false result at least once, and that each statement is executed at least once. Syn: branch coverage. Contrast with condition coverage, multiple condition coverage, path coverage, statement coverage.[G.Myers]

Design-based testing. Designing tests based on objectives derived from the architectural or detail design of the software (e.g., tests that execute specific invocation paths or probe the worst case behaviour of algorithms). [BCS]

Dirty testing. Negative testing. [Beizer]

Dynamic testing. Testing, based on specific test cases, by execution of the test object or running programs [Tim Koomen, 1999]

End-to-End testing. Similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Equivalence Partitioning: An approach where classes of inputs are categorized for product or function validation. This usually does not include combinations of inputs, but rather a single representative value per class. For example, for a given function there may be several classes of input that may be used for positive testing. If a function expects an integer and receives an integer as input, this would be considered a positive test assertion. On the other hand, if a character or any input class other than integer is provided, this would be considered a negative test assertion or condition.
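A small sketch of the integer example above, exercising one representative value per equivalence class (the handle function is illustrative):

    # The function under test expects an integer (illustrative).
    def handle(value):
        if not isinstance(value, int):
            raise TypeError("integer expected")
        return value * 2

    # One representative value per equivalence class, not every possible input:
    handle(7)            # class: valid integer -> positive test assertion
    try:
        handle("x")      # class: non-integer input -> negative test assertion
    except TypeError:
        pass             # the rejection is the expected behavior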

Error: An error is a mistake of commission or omission that a person makes. An error causes a defect. In software development one error may cause one or more defects in requirements, designs, programs, or tests.[Robert M. Poston, 1996.]

Errors: The amount by which a result is incorrect. Mistakes are usually a result of a human action. Human mistakes (errors) often result in faults contained in the source code, specification, documentation, or other product deliverable. Once a fault is encountered, the end result will be a program failure. The failure usually has some margin of error, either high, medium, or low.

Error Guessing: Another common approach to black-box validation. Black-box testing is when anything other than the source code may be used for testing. This is the most common approach to testing. Error guessing is when random inputs or conditions are used for testing. Random in this case includes a value either produced by a computerized random number generator, or an ad hoc value or test condition provided by the engineer.

Error guessing. A test case design technique where the experience of the tester is used to postulate what faults exist, and to design tests specially to expose them [from BS7925-1]

Error seeding. The purposeful introduction of faults into a program to test effectiveness of a test suite or other quality assurance program. [R. V. Binder, 1999]

Exception Testing. Identify error messages and exception handling processes and the conditions that trigger them. [William E. Lewis, 2000]

Exhaustive Testing.(NBS) Executing the program with all possible combinations of values for program variables. Feasible only for small, simple programs.

Exploratory Testing: An interactive process of concurrent product exploration, test design, and test execution. The heart of exploratory testing can be stated simply: The outcome of this test influences the design of the next test. [James Bach]

Failure: A failure is a deviation from expectations exhibited by software and observed as a set of symptoms by a tester or user. A failure is caused by one or more defects. The Causal Trail. A person makes an error that causes a defect that causes a failure.[Robert M. Poston, 1996]

Fix testing. Rerunning of a test that previously found the bug in order to see if a supplied fix works. [Scott Loveland, 2005]

Follow-up testing. We vary a test that yielded a less-than-spectacular failure. We vary the operation, data, or environment, asking whether the underlying fault in the code can yield a more serious failure or a failure under a broader range of circumstances. [Measuring the Effectiveness of Software Testers, Cem Kaner, STAR East 2003]

Formal Testing. (IEEE) Testing conducted in accordance with test plans and procedures that have been reviewed and approved by a customer, user, or designated level of management. Antonym: informal testing.

Framework scenario. A test scenario definition that provides only enough high-level information to remind the tester of everything that needs to be covered for that scenario. The description captures the activity essence, but trusts the tester to work through the specific steps required.[Scott Loveland, 2005]

Free Form Testing. Ad hoc or brainstorming using intuition to define test cases. [William E. Lewis, 2000]

Functional Decomposition Approach. An automation method in which the test cases are reduced to fundamental tasks, navigation, functional tests, data verification, and return navigation; also known as Framework Driven Approach. [Daniel J. Mosley, 2002]

Functional testing. Application of test data derived from the specified functional requirements without regard to the final program structure. Also known as black-box testing.

Function verification test (FVT). Testing of a complete, yet containable functional area or component within the overall software package. Normally occurs immediately after Unit test. Also known as Integration test. [Scott Loveland, 2005]

Gray box testing. Tests involving inputs and outputs, but test design is educated by information about the code or the program operation of a kind that would normally be out of scope of view of the tester.[Cem Kaner]

Gray box testing. Test designed based on the knowledge of algorithm, internal states, architectures, or other high -level descriptions of the program behavior. [Doug Hoffman]

Gray box testing. Examines the activity of back-end components during test case execution. Two types of problems that can be encountered during gray-box testing are:

A component encounters a failure of some kind, causing the operation to be aborted. The user interface will typically indicate that an error has occurred.

The test executes in full, but the content of the results is incorrect. Somewhere in the system, a component processed data incorrectly, causing the error in the results.

[Elfriede Dustin. "Quality Web Systems: Performance, Security & Usability."]

Grooved Tests. Tests that simply repeat the same activity against a target product from cycle to cycle. [Scott Loveland, 2005]

Heuristic Testing: An approach to test design that employs heuristics to enable rapid development of test cases.[James Bach]

High-level tests. These tests involve testing whole, complete products [Kit, 1995]

HTML validation testing. Specific to Web testing. This certifies that the HTML meets specifications and internal coding standards.

W3C Markup Validation Service, a free service that checks Web documents in formats like HTML and XHTML for conformance to W3C Recommendations and other standards.

Incremental integration testing. Continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.

Inspection. A formal evaluation technique in which software requirements, design, or code are examined in detail by person or group other than the author to detect faults, violations of development standards, and other problems [IEEE94]. A quality improvement process for written material that consists of two dominant components: product (document) improvement and process improvement (document production and inspection).

Integration. The process of combining software components or hardware components or both into overall system.

Integration testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

Integration Testing. Testing conducted after unit and feature testing. The intent is to expose faults in the interactions between software modules and functions. Either top-down or bottom-up approaches can be used. A bottom-up method is preferred, since it leads to earlier unit testing (step-level integration). This method is contrary to the big-bang approach, where all source modules are combined and tested in one step. The big-bang approach to integration should be discouraged.

Interface Tests. Programs that provide test facilities for external interfaces and function calls. Simulation is often used to test external interfaces that currently may not be available for testing or are difficult to control. For example, hardware resources such as hard disks and memory may be difficult to control. Therefore, simulation can provide the characteristics or behaviors for specific function.

Internationalization testing (I18N) - testing related to handling foreign text and data within the program. This would include sorting, importing and exporting text and data, correct handling of currency and date and time formats, string parsing, upper and lower case handling and so forth. [Clinton De Young, 2003].

Interoperability Testing. Testing which measures the ability of your software to communicate across the network with multiple machines from multiple vendors, each of whom may have interpreted a design specification critical to your success differently.

Inter-operability Testing. True inter-operability testing concerns testing for unforeseen interactions with other packages with which your software has no direct connection. In some quarters, inter-operability testing labor equals all other testing combined. This is the kind of testing that I say shouldn't be done because it can't be done. [from Quality Is Not The Goal. By Boris Beizer, Ph.D.]


Install/uninstall testing. Testing of full, partial, or upgrade install/uninstall processes.

Key Word-Driven Testing. The approach developed by Carl Nagle of the SAS Institute that is offered as freeware on the Web; Key Word-Driven Testing is an enhancement to the data-driven methodology. [Daniel J. Mosley, 2002]

Latent bug A bug that has been dormant (unobserved) in two or more releases. [R. V. Binder, 1999]

Lateral testing. A test design technique based on lateral thinking principles, to identify faults. [Dorothy Graham, 1999]

Limits testing. See Boundary Condition testing.

Load testing. Testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.

Load stress test. A test designed to determine how heavy a load the application can handle.

Load-stability test. A test designed to determine whether a Web application will remain serviceable over an extended time span.

Load isolation test. The workload for this type of test is designed to contain only the subset of test cases that caused the problem in previous testing.

Longevity testing. See Reliability testing.

Long-haul Testing. See Reliability testing.

Master Test Planning. An activity undertaken to orchestrate the testing effort across levels and organizations.[Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]

Memory leak testing. Testing the server components to see if memory is not properly referenced and released, which can lead to instability and the product's crashing.

Model-Based Testing. Model-based testing takes the application and models it so that each state of each input, output, form, and function is represented. Since this is based on detailing the various states of objects and data, this type of testing is very similar to charting out states. Many times a tool is used to automatically go through all the states in the model and try different inputs in each to ensure that they all interact correctly.[Lydia Ash, 2003]

Monkey Testing. (Smart monkey testing) Input are generated from probability distributions that reflect actual expected usage statistics -- e.g., from user profiles. There are different levels of IQ in smart monkey testing. In the simplest, each input is considered independent of the other inputs. That is, a given test requires an input vector with five components. In low IQ testing, these would be generated independently. In high IQ monkey testing, the correlation (e.g., the covariance) between these input distributions is taken into account. In all branches of smart monkey testing, the input is considered as a single event.[Visual Test 6 Bible by Thomas R. Arnold, 1998 ]

Monkey Testing. (Brilliant monkey testing) The inputs are created from a stochastic regular expression or stochastic finite-state machine model of user behavior. That is, not only are the values determined by probability distributions, but the sequence of values and the sequence of states in which the input provider goes is driven by specified probabilities.[Visual Test 6 Bible by Thomas R. Arnold, 1998 ]

Monkey Testing. (Dumb-monkey testing) Inputs are generated from a uniform probability distribution without regard to the actual usage statistics.[Visual Test 6 Bible by Thomas R. Arnold, 1998 ]
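A minimal dumb-monkey sketch in Python: inputs are drawn from a uniform distribution with no regard to actual usage statistics, and any crash is reported (the names and the target function are illustrative):

    import random

    def dumb_monkey(func, trials=1000):
        """Fire uniformly random inputs at func and report any crash."""
        for _ in range(trials):
            value = random.randint(-100, 100)  # uniform; ignores real usage stats
            try:
                func(value)
            except Exception as exc:
                print(f"crash on input {value}: {exc!r}")

    dumb_monkey(lambda x: 1000 // (x - 42))  # will usually hit x == 42 and report it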

Maximum Simultaneous Connection testing. This is a test performed to determine the number of connections which the firewall or Web server is capable of handling.

Migration Testing. Testing to see if the customer will be able to transition smoothly from a prior version of the software to a new one. [Scott Loveland, 2005]

Mutation testing. A testing strategy where small variations to a program are inserted (a mutant), followed by execution of an existing test suite. If the test suite detects the mutant, the mutant is 'retired.' If undetected, the test suite must be revised. [R. V. Binder, 1999]
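A toy illustration in Python: a single-character variation (">=" changed to "<=") produces a mutant, and a test suite that detects it "kills" the mutant (all names are illustrative):

    def max_of(a, b):             # original code
        return a if a >= b else b

    def max_of_mutant(a, b):      # mutant: ">=" changed to "<="
        return a if a <= b else b

    def suite_passes(impl):
        return impl(2, 1) == 2    # the existing test suite, reduced to one check

    assert suite_passes(max_of)              # the original passes
    assert not suite_passes(max_of_mutant)   # the suite detects (kills) the mutant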

Multiple Condition Coverage. A test coverage criteria which requires enough test cases such that all possible combinations of condition outcomes in each decision, and all points of entry, are invoked at least once.[G.Myers] Contrast with branch coverage, condition coverage, decision coverage, path coverage, statement coverage.

Negative test. A test whose primary purpose is falsification; that is, tests designed to break the software. [B. Beizer, 1995]

Noncritical code analysis. Examines software elements that are not designated safety-critical and ensures that these elements do not cause a hazard. (IEEE)

Orthogonal array testing: A technique that can be used to reduce the number of combinations and provide maximum coverage with a minimum number of test cases. Note that it is an old and proven technique: the orthogonal array was introduced by Plackett and Burman in 1946 and was applied to testing by G. Taguchi, 1987.

Orthogonal array testing: Mathematical technique to determine which variations of parameters need to be tested. [William E. Lewis, 2000]

Oracle. Test Oracle: a mechanism to produce the predicted outcomes to compare with the actual outcomes of the software under test [from BS7925-1]

Parallel Testing. Testing a new or an alternate data processing system with the same source data that is used in another system. The other system is considered as the standard of comparison. Syn: parallel run.[ISO]

Penetration testing. The process of attacking a host from outside to ascertain remote security vulnerabilities.

Performance Testing. Testing conducted to evaluate the compliance of a system or component with specific performance requirements [BS7925-1]

Performance testing can be undertaken to: 1) show that the system meets specified performance objectives, 2) tune the system, 3) determine the factors in hardware or software that limit the system's performance, and 4) project the system's future load- handling capacity in order to schedule its replacements" [Software System Testing and Quality Assurance. Beizer, 1984, p. 256]

Postmortem. Self-analysis of interim or fully completed testing activities with the goal of creating improvements to be used in future.[Scott Loveland, 2005]

Preventive Testing Building test cases based upon the requirements specification prior to the creation of the code, with the express purpose of validating the requirements [Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]

Prior Defect History Testing. Test cases are created or rerun for every defect found in prior tests of the system. [William E. Lewis, 2000]

Qualification Testing. (IEEE) Formal testing, usually conducted by the developer for the consumer, to demonstrate that the software meets its specified requirements. See: acceptance testing.

Quality. The degree to which a program possesses a desired combination of attributes that enable it to perform its specified end use.

Quality Assurance (QA) Consists of planning, coordinating and other strategic activities associated with measuring product quality against external requirements and specifications (process-related activities).

Quality Control (QC) Consists of monitoring, controlling and other tactical activities associated with the measurement of product quality goals.

Our definition of Quality: Achieving the target (not conformance to requirements as used by many authors) & minimizing the variability of the system under test

Race condition defect. Many concurrent defects result from data-race conditions. A data-race condition may be defined as two accesses to a shared variable, at least one of which is a write, with no mechanism used by either to prevent simultaneous access. However, not all race conditions are defects.
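A classic illustration in Python: two threads perform an unprotected read-modify-write on a shared variable. Depending on thread scheduling, some increments may be lost (the loss is not guaranteed to appear on any given run):

    import threading

    counter = 0

    def bump(n):
        global counter
        for _ in range(n):
            counter += 1  # unprotected read-modify-write on shared state

    threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(counter)  # expected 200000; a data race may make it smaller

Guarding the update with a threading.Lock removes the race.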

Recovery testing. Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Regression Testing. Testing conducted for the purpose of evaluating whether or not a change to the system (all CM items) has introduced a new failure. Regression testing is often accomplished through the construction, execution and analysis of product and system tests.

Regression Testing. - Testing that is performed after making a functional improvement or repair to the program. Its purpose is to determine if the change has regressed other aspects of the program [Glenford J.Myers, 1979]

Reengineering. The process of examining and altering an existing system to reconstitute it in a new form. May include reverse engineering (analyzing a system and producing a representation at a higher level of abstraction, such as design from code), restructuring (transforming a system from one representation to another at the same level of abstraction), redocumentation (analyzing a system and producing user and support documentation), forward engineering (using software products derived from an existing system, together with new requirements, to produce a new system), and translation (transforming source code from one language to another or from one version of a language to another).

Reference testing. A way of deriving expected outcomes by manually validating a set of actual outcomes. A less rigorous alternative to predicting expected outcomes in advance of test execution. [Dorothy Graham, 1999]

Reliability testing. Verify the probability of failure free operation of a computer program in a specified environment for a specified time.

Reliability of an object is defined as the probability that it will not fail under specified conditions, over a period of time. The specified conditions are usually taken to be fixed, while the time is taken as an independent variable. Thus reliability is often written R(t) as a function of time t, the probability that the object will not fail within time t.

Any computer user would probably agree that most software is flawed, and the evidence for this is that it does fail. All software flaws are designed in -- the software does not break, rather it was always broken. But unless conditions are right to excite the flaw, it will go unnoticed -- the software will appear to work properly. [Professor Dick Hamlet. Ph.D.]

Range Testing. For each input identifies the range over which the system behavior should be the same. [William E. Lewis, 2000]

Risk-Based Testing: Any testing organized to explore specific product risks.[James Bach website]

Risk management. An organized process to identify what can go wrong, to quantify and access associated risks, and to implement/control the appropriate approach for preventing or handling each risk identified.

Robust test. A test that compares a small amount of information, so that unexpected side effects are less likely to affect whether the test passes or fails. [Dorothy Graham, 1999]

Sanity Testing - typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is often crashing systems, bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.

Scalability testing is a subtype of performance test where performance requirements for response time, throughput, and/or utilization are tested as load on the SUT is increased over time. [Load Testing Terminology by Scott Stirling ]

Scenario-Based Testing. Scenario-based testing is one way to document the software specifications and requirements for a project. Scenario-based testing takes each user scenario and develops tests that verify that a given scenario works. Scenarios focus on the main goals and requirements. If the scenario is able to flow from the beginning to the end, then it passes.[Lydia Ash, 2003]

(SDLC) System Development Life Cycle. The phases used to develop, maintain, and replace information systems. Typical phases in the SDLC are: Initiation Phase, Planning Phase, Functional Design Phase, System Design Phase, Development Phase, Integration and Testing Phase, Installation and Acceptance Phase, and Maintenance Phase.

The V-model talks about SDLC (System Development Life Cycle) phases and maps them to various test levels

Security Audit. An examination (often by third parties) of a server's security controls and possibly its disaster recovery mechanisms.

Sensitive test. A test that compares a large amount of information, so that it is more likely to detect unexpected differences between the actual and expected outcomes of the test. [Dorothy Graham, 1999]

Server log testing. Examining the server logs after particular actions or at regular intervals to determine if there are problems or errors generated or if the server is entering a faulty state.

Service test. Test software fixes, both individually and bundled together, for software that is already in use by customers. [Scott Loveland, 2005]

Skim Testing A testing technique used to determine the fitness of a new build or release of an AUT to undergo further, more thorough testing. In essence, a "pretest" activity that could form one of the acceptance criteria for receiving the AUT for testing [Testing IT: An Off-the-Shelf Software Testing Process by John Watkins]

Smoke test describes an initial set of tests that determine if a new version of application performs well enough for further testing.[Louise Tamres, 2002]

Sniff test. A quick check to see if any major abnormalities are evident in the software.[Scott Loveland, 2005 ]

Specification-based test. A test whose inputs are derived from a specification.

Spike testing. Testing performance or recovery behavior when the system under test (SUT) is stressed with a sudden and sharp increase in load; should be considered a type of load test. [Load Testing Terminology by Scott Stirling]

Standards. Many published standards relate to software testing; several of them (e.g., IEEE 829, BS 7925-1) are cited throughout this dictionary.

STEP (Systematic Test and Evaluation Process) Software Quality Engineering's copyrighted testing methodology.

Stability testing. Testing the ability of the software to continue to function, over time and over its full range of use, without failing or causing failure. (see also Reliability testing)

State-based testing: Testing with test cases developed by modeling the system under test as a state machine [R. V. Binder, 1999]

State Transition Testing. A technique in which the states of a system are first identified, and then test cases are written to exercise the triggers that cause a transition from one state to another. [William E. Lewis, 2000]
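A minimal sketch: a two-state door is modeled as a state machine, and one test case is written per trigger, including an invalid trigger that should leave the state unchanged (the class is illustrative):

    class Door:
        def __init__(self):
            self.state = "CLOSED"

        def open(self):
            if self.state == "CLOSED":
                self.state = "OPEN"

        def close(self):
            if self.state == "OPEN":
                self.state = "CLOSED"

    d = Door()
    d.open()
    assert d.state == "OPEN"     # transition: CLOSED -> OPEN
    d.close()
    assert d.state == "CLOSED"   # transition: OPEN -> CLOSED
    d.close()
    assert d.state == "CLOSED"   # invalid trigger leaves the state unchanged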

Static testing. Source code analysis. Analysis of source code to expose potential defects.

Statistical testing. A test case design technique in which a model is used of the statistical distribution of the input to construct representative test cases. [BCS]

Stealth bug. A bug that removes information useful for its diagnosis and correction. [R. V. Binder, 1999]

Storage test. Study how memory and space is used by the program, either in resident memory or on disk. If there are limits of these amounts, storage tests attempt to prove that the program will exceed them. [Cem Kaner, 1999, p55]

Streamable Test cases. Test cases which are able to run together as part of a large group. [Scott Loveland, 2005]

Stress / Load / Volume test. Tests that provide a high degree of activity, either using boundary conditions as inputs or multiple copies of a program executing in parallel as examples.

Stress Test. A stress test is designed to determine how heavy a load the Web application can handle. A huge load is generated as quickly as possible in order to stress the application to its limit. The time between transactions is minimized in order to intensify the load on the application, and the time the users would need for interacting with their Web browsers is ignored. A stress test helps determine, for example, the maximum number of requests a Web application can handle in a specific period of time, and at what point the application will overload and break down.[Load Testing by S. Asbock]

Structural Testing. (1)(IEEE) Testing that takes into account the internal mechanism [structure] of a system or component. Types include branch testing, path testing, statement testing. (2) Testing to insure each program statement is made to execute during testing and that each program statement performs its intended function. Contrast with functional testing. Syn: white-box testing, glass-box testing, logic driven testing.

System testing. Black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.

System verification test. (SVT). Testing of an entire software package for the first time, with all components working together to deliver the project's intended purpose on supported hardware platforms. [Scott Loveland, 2005]

Table testing. Test access, security, and data integrity of table entries. [William E. Lewis, 2000]

Test Artifact Set. Captures and presents information related to the tests performed.

Test Bed. An environment containing the hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test [IEEE 610].

Test Case. A set of test inputs, executions, and expected results developed for a particular objective.

Test conditions. The set of circumstances that a test invokes. [Daniel J. Mosley, 2002]

Test Coverage. The degree to which a given test or set of tests addresses all specified test cases for a given system or component.

Test Criteria. Decision rules used to determine whether software item or software feature passes or fails a test.

Test data. The actual (sets of) values used in the test or that are necessary to execute the test. Test data instantiates the condition being tested (as input or as pre-existing data) and is used to verify that a specific requirement has been successfully implemented (comparing actual results to the expected results). [Daniel J. Mosley, 2002]

Test Documentation. (IEEE) Documentation describing plans for, or results of, the testing of a system or component, Types include test case specification, test incident report, test log, test plan, test procedure, test report.

Test Driver. A software module or application used to invoke a test item and, often, provide test inputs (data), control and monitor execution. A test driver automates the execution of test procedures.

Test-driven development (TDD). An evolutionary approach to development that combines test-first development (where you write a test before you write just enough production code to fulfill that test) and refactoring. [Beck 2003; Astels 2003]
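A minimal red/green sketch of the cycle (names illustrative): the test is written first and would fail, then just enough production code is added to make it pass, and the code can then be refactored with the test as a safety net.

    # Red: write the failing test first; add() does not exist yet.
    def test_add():
        assert add(2, 3) == 5

    # Green: write just enough production code to make the test pass.
    def add(a, b):
        return a + b

    test_add()  # the test now passes; refactor next, rerunning the test each time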

Test Harness. A system of test drivers and other tools to support test execution (e.g., stubs, executable test cases, and test drivers). See: test driver.

Test Inputs. Artifacts from work processes that are used to identify and define actions that occur during testing. These artifacts may come from development processes that are external to the test group. Examples include Functional Requirements Specifications and Design Specifications. They may also be derived from previous testing phases and passed to subsequent testing activities.[Daniel J. Mosley, 2002]

Test Idea: an idea for testing something.[James Bach]

Test Item. A software item which is the object of testing.[IEEE]

Test Log. A chronological record of all relevant details about the execution of a test. [IEEE]

Test logistics: the set of ideas that guide the application of resources to fulfilling the test strategy.[James Bach]

Test Plan. A high-level document that defines a testing project so that it can be properly measured and controlled. It defines the test strategy and organized elements of the test life cycle, including resource requirements, project schedule, and test requirements

Test Procedure. A document, providing detailed instructions for the [manual] execution of one or more test cases. [BS7925-1] Often called - a manual test script.

Test Results. Data captured during the execution of test and used in calcu- lating the different key measures of testing.[Daniel J. Mosley, 2002]

Test Rig A flexible combination of hardware, software, data, and interconnectivity that can be configured by the Test Team to simulate a variety of different Live Environments on which an AUT can be delivered.[Testing IT: An Off-the-Shelf Software Testing Process by John Watkins ]

Test Script. The computer readable instructions that automate the execution of a test procedure (or portion of a test procedure). Test scripts may be created (recorded) or automatically generated using test automation tools, programmed using a programming language, or created by a combination of recording, generating, and programming. [Daniel J. Mosley, 2002]

Test strategy. Describes the general approach and objectives of the test activities. [Daniel J. Mosley, 2002]

Test Status. The assessment of the result of running tests on software.

Test Stub A dummy software component or object used (during development and testing) to simulate the behavior of a real component. The stub typically provides test output.

Test Suites A test suite consists of multiple test cases (procedures and data) that are combined and often managed by a test harness.

Test technique: test method; a heuristic or algorithm for designing and/or executing a test; a recipe for a test. [James Bach]

Test Tree. A physical implementation of Test Suite. [Dorothy Graham, 1999]

Testability. Attributes of software that bear on the effort needed for validating the modified software [ISO 8402]

Testability Hooks. Those functions, integrated in the software that can be invoked through primarily undocumented interfaces to drive specific processing which would otherwise be difficult to exercise. [Scott Loveland, 2005]

Testing. The execution of tests with the intent of proving that the system and application under test does or does not perform according to the requirements specification.

Test Process Improvement (TPI). A method for baselining testing processes and identifying process improvement opportunities, using a static model developed by Martin Pol and Tim Koomen.

Test Suite. The set of tests that when executed instantiate a test scenario.[Daniel J. Mosley, 2002]

Test Workspace. Private areas where testers can install and test code in accordance with the project's adopted standards in relative isolation from the developers.[Daniel J. Mosley, 2002]

Thread Testing. A testing technique used to test the business functionality or business logic of the AUT in an end-to-end manner, in much the same way a User or an operator might interact with the system during its normal use.[Testing IT: An Off-the-Shelf Software Testing Process by John Watkins ]

Timing and Serialization Problems. A class of software defect, usually in multithreaded code, in which two or more tasks attempt to alter a shared software resource without properly coordinating their actions. Also known as Race Conditions.[Scott Loveland, 2005]

Translation testing. See internationalization testing.

Thrasher. A type of program used to test for data integrity errors on mainframe systems. The name is derived from the first such program, which deliberately generated memory thrashing (the overuse of a large amount of memory, leading to heavy paging or swapping) while monitoring for corruption. [Scott Loveland, 2005]

Unit Testing. Testing performed to isolate and expose faults and failures as soon as the source code is available, regardless of the external interfaces that may be required. Oftentimes, the detailed design and requirements documents are used as a basis to compare how and what the unit is able to perform. White and black-box testing methods are combined during unit testing.

Usability testing. Testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer.

Validation. The comparison between the actual characteristics of something (e.g., a product of a software project) and the expected characteristics. Validation is checking that you have built the right system.

Variance. A variance is an observable and measurable difference between an actual result and an expected result.

Verification. The comparison between the actual characteristics of something (e.g., a product of a software project) and the specified characteristics. Verification is checking that we have built the system right.

Volume testing. Testing where the system is subjected to large volumes of data.[BS7925-1]

Walkthrough. In its most usual form, a walkthrough is a step-by-step simulation of the execution of a procedure, as when walking through code line by line with an imagined set of inputs. The term has been extended to the review of material that is not procedural, such as data descriptions, reference manuals, specifications, etc.

White Box Testing (glass-box). Testing done under a structural testing strategy; it requires complete access to the object's structure, that is, the source code. [B. Beizer, 1995 p8]

Standards

1 IEEE Standards for Software Testing

IEEE 829-1998, also known as the 829 Standard for Software Test Documentation is an IEEE standard that specifies the form of a set of documents for use in eight defined stages of software testing, each stage potentially producing its own separate type of document. The standard specifies the format of these documents but does not stipulate whether they all must be produced, nor does it include any criteria regarding adequate content for these documents. These are a matter of judgment outside the purview of the standard. The documents are:

• Test Plan: a management planning document that shows:

• How the testing will be done

• Who will do it

• What will be tested

• How long it will take

• What the test coverage will be, i.e. what quality level is required

• Test Design Specification: detailing test conditions and the expected results as well as test pass criteria.

• Test Case Specification: specifying the test data for use in running the test conditions identified in the Test Design Specification

• Test Procedure Specification: detailing how to run each test, including any set-up preconditions and the steps that need to be followed

• Test Item Transmittal Report: reporting on when tested software components have progressed from one stage of testing to the next

• Test Log: recording which test cases were run, who ran them, in what order, and whether each test passed or failed

• Test Incident Report: detailing, for any test that failed, the actual versus expected result, and other information intended to throw light on why a test has failed. This document is deliberately named as an incident report, and not a fault report. The reason is that a discrepancy between expected and actual results can occur for a number of reasons other than a fault in the system. These include the expected results being wrong, the test being run wrongly, or inconsistency in the requirements meaning that more than one interpretation could be made. The report consists of all details of the incident such as actual and expected results, when it failed, and any supporting evidence that will help in its resolution. The report will also include, if possible, an assessment of the impact upon testing of an incident.

• Test Summary Report: A management report providing any important information uncovered by the tests accomplished, and including assessments of the quality of the testing effort, the quality of the software system under test, and statistics derived from Incident Reports. The report also records what testing was done and how long it took, in order to improve any future test planning. This final document is used to indicate whether the software system under test is fit for purpose according to whether or not it has met acceptance criteria defined by project stakeholders.

2 Relationship with other standards

Other standards that may be referred to when documenting according to IEEE 829 include:

• IEEE 1008, a standard for unit testing

• IEEE 1012, a standard for Software Verification and Validation

• IEEE 1028, a standard for software inspections

• IEEE 1044, a standard for the classification of software anomalies

• IEEE 1044-1, a guide to the classification of software anomalies

• IEEE 1233, a guide for developing system requirements specifications

• IEEE 730, a standard for software quality assurance plans

• IEEE 1061, a standard for software quality metrics and methodology

• BS 7925-1, a vocabulary of terms used in software testing

• BS 7925-2, a standard for software component testing.

C) CMM Levels

Capability Maturity Model (CMM) broadly refers to a process improvement approach that is based on a process model. CMM also refers specifically to the first such model, developed by the Software Engineering Institute (SEI) in the mid-1980s, as well as the family of process models that followed. A process model is a structured collection of practices that describe the characteristics of effective processes; the practices included are those proven by experience to be effective. [1]

CMM can be used to assess an organization against a scale of five process maturity levels. Each level ranks the organization according to its standardization of processes in the subject area being assessed. The subject areas can be as diverse as software engineering, systems engineering, project management, risk management, system acquisition, information technology (IT) services and personnel management.

CMM was developed by the SEI at Carnegie Mellon University in Pittsburgh. It has been used extensively for avionics software and government projects in North America, Europe, Asia, Australia, South America, and Africa. [2] Currently, some government departments require software development contract organizations to achieve and operate at a level 3 standard.

3 Maturity model

The Capability Maturity Model (CMM) is a way to develop and refine an organization's processes. The first CMM was for the purpose of developing and refining software development processes. A maturity model is a structured collection of elements that describe characteristics of effective processes. A maturity model provides:

• a place to start

• the benefit of a community’s prior experiences

• a common language and a shared vision

• a framework for prioritizing actions

• a way to define what improvement means for your organization

A maturity model can be used as a benchmark for assessing different organizations for equivalent comparison. It describes the maturity of a company based upon the projects the company is handling and its clients.

Levels of the CMM

1 Level 1 - Initial

At maturity level 1, processes are usually ad hoc and the organization usually does not provide a stable environment. Success in these organizations depends on the competence and heroics of the people in the organization and not on the use of proven processes. In spite of this ad hoc, chaotic environment, maturity level 1 organizations often produce products and services that work; however, they frequently exceed the budget and schedule of their projects.

Maturity level 1 organizations are characterized by a tendency to overcommit, to abandon processes in a time of crisis, and to be unable to repeat their past successes.

Level 1 software project success depends on having quality people.

2 Level 2 - Repeatable

At maturity level 2, software development successes are repeatable. The processes may not repeat for all the projects in the organization. The organization may use some basic project management to track cost and schedule.

Process discipline helps ensure that existing practices are retained during times of stress. When these practices are in place, projects are performed and managed according to their documented plans.

Project status and the delivery of services are visible to management at defined points (for example, at major milestones and at the completion of major tasks).

Basic project management processes are established to track cost, schedule, and functionality. The minimum process discipline is in place to repeat earlier successes on projects with similar applications and scope. There is still a significant risk of exceeding cost and time estimates.

3 Level 3 - Defined

The organization’s set of standard processes, which is the basis for level 3, is established and improved over time. These standard processes are used to establish consistency across the organization. Projects establish their defined processes by tailoring the organization’s set of standard processes according to tailoring guidelines.

The organization’s management establishes process objectives based on the organization’s set of standard processes and ensures that these objectives are appropriately addressed.

A critical distinction between level 2 and level 3 is the scope of standards, process descriptions, and procedures. At level 2, the standards, process descriptions, and procedures may be quite different in each specific instance of the process (for example, on a particular project). At level 3, the standards, process descriptions, and procedures for a project are tailored from the organization’s set of standard processes to suit a particular project or organisational unit.

4 Level 4 - Managed

Using precise measurements, management can effectively control the software development effort. In particular, management can identify ways to adjust and adapt the process to particular projects without measurable losses of quality or deviations from specifications. At this level, the organization sets quantitative quality goals for both the software process and software maintenance.

Subprocesses are selected that significantly contribute to overall process performance. These selected subprocesses are controlled using statistical and other quantitative techniques.

A critical distinction between maturity level 3 and maturity level 4 is the predictability of process performance. At maturity level 4, the performance of processes is controlled using statistical and other quantitative techniques, and is quantitatively predictable. At maturity level 3, processes are only qualitatively predictable.

5 Level 5 - Optimizing

Maturity level 5 focuses on continually improving process performance through both incremental and innovative technological improvements. Quantitative process-improvement objectives for the organization are established, continually revised to reflect changing business objectives, and used as criteria in managing process improvement. The effects of deployed process improvements are measured and evaluated against the quantitative process-improvement objectives. Both the defined processes and the organization’s set of standard processes are targets of measurable improvement activities.

Process improvements to address common causes of process variation and measurably improve the organization’s processes are identified, evaluated, and deployed.

Optimizing processes that are nimble, adaptable and innovative depends on the participation of an empowered workforce aligned with the business values and objectives of the organization. The organization’s ability to rapidly respond to changes and opportunities is enhanced by finding ways to accelerate and share learning.

A critical distinction between maturity level 4 and maturity level 5 is the type of process variation addressed. At maturity level 4, processes are concerned with addressing special causes of process variation and providing statistical predictability of the results. Though processes may produce predictable results, the results may be insufficient to achieve the established objectives. At maturity level 5, processes are concerned with addressing common causes of process variation and changing the process (that is, shifting the mean of the process performance) to improve process performance (while maintaining statistical predictability) to achieve the established quantitative process-improvement objectives.

The CMMI contains several key process areas indicating the aspects of product development that are to be covered by company processes.

|Key Process Areas of the Capability Maturity Model Integration (CMMI) |
|Abbreviation |Name |Area |Maturity Level |
|CAR |Causal Analysis and Resolution |Support |5 |
|CM |Configuration Management |Support |2 |
|DAR |Decision Analysis and Resolution |Support |3 |
|IPM |Integrated Project Management |Project Management |3 |
|ISM |Integrated Supplier Management |Project Management |3 |
|IT |Integrated Teaming |Project Management |3 |
|MA |Measurement and Analysis |Support |2 |
|OEI |Organizational Environment for Integration |Support |3 |
|OID |Organizational Innovation and Deployment |Process Management |5 |
|OPD |Organizational Process Definition |Process Management |3 |
|OPF |Organizational Process Focus |Process Management |3 |
|OPP |Organizational Process Performance |Process Management |4 |
|OT |Organizational Training |Process Management |3 |
|PI |Product Integration |Engineering |3 |
|PMC |Project Monitoring and Control |Project Management |2 |
|PP |Project Planning |Project Management |2 |
|PPQA |Process and Product Quality Assurance |Support |2 |
|QPM |Quantitative Project Management |Project Management |4 |
|RD |Requirements Development |Engineering |3 |
|REQM |Requirements Management |Engineering |2 |
|RSKM |Risk Management |Project Management |3 |
|SAM |Supplier Agreement Management |Project Management |2 |
|TS |Technical Solution |Engineering |3 |
|VAL |Validation |Engineering |3 |
|VER |Verification |Engineering |3 |

A) Metrics for evaluating application system Testing

Test Coverage = Number of units (KLOC/FP) tested / total size of the system

Number of tests per unit size = Number of test cases per KLOC/FP

Acceptance criteria tested = Acceptance criteria tested / total acceptance criteria

Defects per size = Defects detected / system size

Test cost (in %) = Cost of testing / total cost *100

Cost to locate defect = Cost of testing / the number of defects located

Achieving Budget = Actual cost of testing / Budgeted cost of testing

Defects detected in testing = Defects detected in testing / total system defects

Defects detected in production = Defects detected in production/system size

Quality of Testing = No of defects found during Testing/(No of defects found during testing + No of acceptance defects found after delivery) *100

Effectiveness of testing to business = Loss due to problems / total resources processed by the system.

System complaints = Number of third party complaints / number of transactions processed

Scale of Ten = Assessment of testing by giving rating in scale of 1 to 10

Source Code Analysis = Number of source code statements changed / total number of tests.

Effort Productivity:

Test Planning Productivity = No. of test cases designed / Actual effort for design and documentation

Test Execution Productivity = No. of test cycles executed / Actual effort for testing
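As a quick worked illustration of two of these measures (all figures are assumed, not drawn from a real project): if testing found 90 defects and 10 more surfaced in acceptance after delivery, Quality of Testing = 90 / (90 + 10) * 100 = 90%; if testing was budgeted at $40,000 but actually cost $50,000, Achieving Budget = 50,000 / 40,000 = 1.25, i.e. 25% over budget.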

B) The product quality measures

1. Customer satisfaction index

(Quality ultimately is measured in terms of customer satisfaction.)

Surveyed before product delivery and after product delivery

(and on-going on a periodic basis, using standard questionnaires)

Number of system enhancement requests per year

Number of maintenance fix requests per year

User friendliness: call volume to customer service hotline

User friendliness: training time per new user

Number of product recalls or fix releases (software vendors)

Number of production re-runs (in-house information systems groups)

2. Delivered defect quantities

Normalized per function point (or per LOC)

At product delivery (first 3 months or first year of operation)

Ongoing (per year of operation)

By level of severity

By category or cause, e.g.: requirements defect, design defect, code defect,

documentation/on-line help defect, defect introduced by fixes, etc.

3. Responsiveness (turnaround time) to users

Turnaround time for defect fixes, by level of severity

Time for minor vs. major enhancements; actual vs. planned elapsed time

4. Product volatility

Ratio of maintenance fixes (to repair the system & bring it into

compliance with specifications), vs. enhancement requests

(requests by users to enhance or change functionality)

5. Defect ratios

Defects found after product delivery per function point

Defects found after product delivery per LOC

Pre-delivery defects: annual post-delivery defects

Defects per function point of the system modifications

6. Defect removal efficiency

Number of post-release defects (found by clients in field operation),

categorized by level of severity

Ratio of defects found internally prior to release (via inspections and testing),

as a percentage of all defects

All defects include defects found internally plus externally (by

customers) in the first year after product delivery

7. Complexity of delivered product

McCabe's cyclomatic complexity counts across the system

Halstead’s measure

Card's design complexity measures

Predicted defects and maintenance costs, based on complexity measures

8. Test coverage

Breadth of functional coverage

Percentage of paths, branches or conditions that were actually tested

Percentage by criticality level: perceived level of risk of paths

The ratio of the number of detected faults to the number of predicted faults.

9. Cost of defects

Business losses per defect that occurs during operation

Business interruption costs; costs of work-arounds

Lost sales and lost goodwill

Litigation costs resulting from defects

Annual maintenance cost (per function point)

Annual operating cost (per function point)

Measurable damage to your boss's career

10. Costs of quality activities

Costs of reviews, inspections and preventive measures

Costs of test planning and preparation

Costs of test execution, defect tracking, version and change control

Costs of diagnostics, debugging and fixing

Costs of tools and tool support

Costs of test case library maintenance

Costs of testing & QA education associated with the product

Costs of monitoring and oversight by the QA organization

(if separate from the development and test organizations)

11. Re-work

Re-work effort (hours, as a percentage of the original coding hours)

Re-worked LOC (source lines of code, as a percentage of the total delivered LOC)

Re-worked software components (as a percentage of the total delivered components)

12. Reliability

Availability (percentage of time a system is available, versus the time

the system is needed to be available)

Mean time between failure (MTBF)

Mean time to repair (MTTR)

Reliability ratio (MTBF / MTTR)

Number of product recalls or fix releases

Number of production re-runs as a ratio of production runs
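As a worked example of the reliability measures above (figures assumed for illustration): a system with an MTBF of 400 hours and an MTTR of 4 hours has a reliability ratio of 400 / 4 = 100, and a system needed for 720 hours in a month but down for 8 of them has an availability of (720 - 8) / 720, or roughly 98.9%.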

SQL Concepts

Basics of the SELECT Statement

In a relational database, data is stored in tables. An example table would relate Social Security Number, Name, and Address:

|EmployeeAddressTable |
|SSN |FirstName |LastName |Address |City |State |
|512687458 |Joe |Smith |83 First Street |Howard |Ohio |
|758420012 |Mary |Scott |842 Vine Ave. |Losantiville |Ohio |
|102254896 |Sam |Jones |33 Elm St. |Paris |New York |
|876512563 |Sarah |Ackerman |440 U.S. 110 |Upton |Michigan |

Now, let's say you want to see the address of each employee. Use the SELECT statement, like so:

SELECT FirstName, LastName, Address, City, State

FROM EmployeeAddressTable;

The following are the results of your query of the database:

|First Name |Last Name |Address |City |State |
|Joe |Smith |83 First Street |Howard |Ohio |
|Mary |Scott |842 Vine Ave. |Losantiville |Ohio |
|Sam |Jones |33 Elm St. |Paris |New York |
|Sarah |Ackerman |440 U.S. 110 |Upton |Michigan |

To explain what you just did, you asked for all of the data in the EmployeeAddressTable, and specifically, you asked for the columns called FirstName, LastName, Address, City, and State. Note that column names and table names do not have spaces...they must be typed as one word; and that the statement ends with a semicolon (;). The general form for a SELECT statement, retrieving all of the rows in the table, is:

SELECT ColumnName, ColumnName, ...

FROM TableName;

To get all columns of a table without typing all column names, use:

SELECT * FROM TableName;
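Applied to the sample table above, that shorthand returns every column and every row:

SELECT * FROM EmployeeAddressTable;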

Each database management system (DBMS) and database software has different methods for logging in to the database and entering SQL commands; see the local computer "guru" to help you get onto the system, so that you can use SQL.

Conditional Selection

To further discuss the SELECT statement, let's look at a new example table (for hypothetical purposes only):

|EmployeeStatisticsTable |
|EmployeeIDNo |Salary |Benefits |Position |
|010 |75000 |15000 |Manager |
|105 |65000 |15000 |Manager |
|152 |60000 |15000 |Manager |
|215 |60000 |12500 |Manager |
|244 |50000 |12000 |Staff |
|300 |45000 |10000 |Staff |
|335 |40000 |10000 |Staff |
|400 |32000 |7500 |Entry-Level |
|441 |28000 |7500 |Entry-Level |

1 Relational Operators

There are six Relational Operators in SQL, and after introducing them, we'll see how they're used:

|= |Equal |
|<> or != (see manual) |Not Equal |
|< |Less Than |
|> |Greater Than |
|<= |Less Than or Equal To |
|>= |Greater Than or Equal To |

The WHERE clause is used to specify that only certain rows of the table are displayed, based on the criteria described in that WHERE clause. It is most easily understood by looking at a couple of examples.

If you wanted to see the EMPLOYEEIDNO's of those making at or over $50,000, use the following:

SELECT EMPLOYEEIDNO

FROM EMPLOYEESTATISTICSTABLE

WHERE SALARY >= 50000;

Notice that the >= (greater than or equal to) sign is used, as we wanted to see those who made greater than $50,000, or equal to $50,000, listed together. This displays:

EMPLOYEEIDNO

------------

010

105

152

215

244

The WHERE description, SALARY >= 50000, is known as a condition (an operation which evaluates to True or False). The same can be done for text columns:

SELECT EMPLOYEEIDNO

FROM EMPLOYEESTATISTICSTABLE

WHERE POSITION = 'Manager';

This displays the ID Numbers of all Managers. Generally, with text columns, stick to equal to or not equal to, and make sure that any text that appears in the statement is surrounded by single quotes ('). Note: Position is now an illegal identifier because it is now an unused, but reserved, keyword in the SQL-92 standard. 

More Complex Conditions: Compound Conditions / Logical Operators

The AND operator joins two or more conditions, and displays a row only if that row's data satisfies ALL conditions listed (i.e. all conditions hold true). For example, to display all staff making over $40,000, use:

SELECT EMPLOYEEIDNO

FROM EMPLOYEESTATISTICSTABLE

WHERE SALARY > 40000 AND POSITION = 'Staff';

The OR operator joins two or more conditions, but returns a row if ANY of the conditions listed hold true. To see all those who make less than $40,000 or have less than $10,000 in benefits, listed together, use the following query:

SELECT EMPLOYEEIDNO

FROM EMPLOYEESTATISTICSTABLE

WHERE SALARY < 40000 OR BENEFITS < 10000;

AND & OR can be combined, for example:

SELECT EMPLOYEEIDNO

FROM EMPLOYEESTATISTICSTABLE

WHERE POSITION = 'Manager' AND SALARY > 60000 OR BENEFITS > 12000;

First, SQL finds the rows where the salary is greater than $60,000 and the position column is equal to Manager, then taking this new list of rows, SQL then sees if any of these rows satisfies the previous AND condition or the condition that the Benefits column is greater than $12,000. Subsequently, SQL only displays this second new list of rows, keeping in mind that anyone with Benefits over $12,000 will be included as the OR operator includes a row if either resulting condition is True. Also note that the AND operation is done first.

To generalize this process, SQL performs the AND operation(s) to determine the rows where the AND operation(s) hold true (remember: all of the conditions are true), then these results are used to compare with the OR conditions, and only display those remaining rows where any of the conditions joined by the OR operator hold true (where a condition or result from an AND is paired with another condition or AND result to use to evaluate the OR, which evaluates to true if either value is true). Mathematically, SQL evaluates all of the conditions, then evaluates the AND "pairs", and then evaluates the OR's (where both operators evaluate left to right).

To look at an example, for a given row for which the DBMS is evaluating the SQL statement Where clause to determine whether to include the row in the query result (the whole Where clause evaluates to True), the DBMS has evaluated all of the conditions, and is ready to do the logical comparisons on this result:

True AND False OR True AND True OR False AND False

First simplify the AND pairs:

False OR True OR False

Now do the OR's, left to right:

True OR False

True

The result is True, and the row passes the query conditions. Be sure to see the next section on NOT's, and the order of logical operations. I hope that this section has helped you understand AND's or OR's, as it's a difficult subject to explain briefly.

To perform OR's before AND's, such as listing employees who make a large salary (over $50,000) or have a large benefits package (over $10,000), and who also happen to be managers, use parentheses:

SELECT EMPLOYEEIDNO

FROM EMPLOYEESTATISTICSTABLE

WHERE POSITION = 'Manager' AND (SALARY > 50000 OR BENEFITS > 10000);

IN & BETWEEN

An easier method of using compound conditions uses IN or BETWEEN. For example, if you wanted to list all managers and staff:

SELECT EMPLOYEEIDNO

FROM EMPLOYEESTATISTICSTABLE

WHERE POSITION IN ('Manager', 'Staff');

or to list those making greater than or equal to $30,000, but less than or equal to $50,000, use:

SELECT EMPLOYEEIDNO

FROM EMPLOYEESTATISTICSTABLE

WHERE SALARY BETWEEN 30000 AND 50000;

To list everyone not in this range, try:

SELECT EMPLOYEEIDNO

FROM EMPLOYEESTATISTICSTABLE

WHERE SALARY NOT BETWEEN 30000 AND 50000;

Similarly, NOT IN lists all rows excluded from the IN list.
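For example, mirroring the IN query above, to list everyone who is neither a manager nor staff:

SELECT EMPLOYEEIDNO

FROM EMPLOYEESTATISTICSTABLE

WHERE POSITION NOT IN ('Manager', 'Staff');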

Additionally, NOT's can be thrown in with AND's & OR's, except that NOT is a unary operator (evaluates one condition, reversing its value, whereas, AND's & OR's evaluate two conditions), and that all NOT's are performed before any AND's or OR's.

SQL Order of Logical Operations (each operates from left to right)

1. NOT

2. AND

3. OR

Using LIKE

Look at the EmployeeAddressTable, and say you wanted to see all people whose last names start with "S"; try:

SELECT EMPLOYEEIDNO

FROM EMPLOYEEADDRESSTABLE

WHERE LASTNAME LIKE 'S%';

The percent sign (%) is used to represent any possible character (number, letter, or punctuation) or set of characters that might appear after the "S". To find those people with last names ending in "S", use '%S', or if you wanted the "S" in the middle of the name, try '%S%'. The '%' can be used for any characters in the same position relative to the given characters. NOT LIKE displays rows not fitting the given description. Other possibilities of using LIKE, or any of the conditionals discussed, are available, though it depends on what DBMS you are using; as usual, consult a manual or your system manager or administrator for the available features on your system, or just to make sure that what you are trying to do is available and allowed. This disclaimer holds for the features of SQL that will be discussed below. This section is just to give you an idea of the possibilities of queries that can be written in SQL.
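Standard SQL also provides the underscore (_) wildcard, which matches exactly one character (again, check your DBMS manual). For example, to find last names that are a single character followed by "ones", such as Jones:

SELECT EMPLOYEEIDNO

FROM EMPLOYEEADDRESSTABLE

WHERE LASTNAME LIKE '_ones';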

Joins

In this section, we will only discuss inner joins, and equijoins, as in general, they are the most useful. For more information, try the SQL links at the bottom of the page.

Good database design suggests that each table lists data only about a single entity, and detailed information can be obtained in a relational database, by using additional tables, and by using a join.

First, take a look at these example tables:

AntiqueOwners

|OwnerID |OwnerLastName |OwnerFirstName |
|01 |Jones |Bill |
|02 |Smith |Bob |
|15 |Lawson |Patricia |
|21 |Akins |Jane |
|50 |Fowler |Sam |

Orders

|OwnerID |ItemDesired |
|02 |Table |
|02 |Desk |
|21 |Chair |
|15 |Mirror |

Antiques

|SellerID |BuyerID |Item |
|01 |50 |Bed |
|02 |15 |Table |
|15 |02 |Chair |
|21 |50 |Mirror |
|50 |01 |Desk |
|01 |21 |Cabinet |
|02 |21 |Coffee Table |
|15 |50 |Chair |
|01 |15 |Jewelry Box |
|02 |21 |Pottery |
|21 |02 |Bookcase |
|50 |01 |Plant Stand |

Keys

First, let's discuss the concept of keys. A primary key is a column or set of columns that uniquely identifies the rest of the data in any given row. For example, in the AntiqueOwners table, the OwnerID column uniquely identifies that row. This means two things: no two rows can have the same OwnerID, and, even if two owners have the same first and last names, the OwnerID column ensures that the two owners will not be confused with each other, because the unique OwnerID column will be used throughout the database to track the owners, rather than the names.

A foreign key is a column in a table where that column is a primary key of another table, which means that any data in a foreign key column must have corresponding data in the other table where that column is the primary key. In DBMS-speak, this correspondence is known as referential integrity. For example, in the Antiques table, both the BuyerID and SellerID are foreign keys to the primary key of the AntiqueOwners table (OwnerID; for purposes of argument, one has to be an Antique Owner before one can buy or sell any items), as, in both tables, the ID rows are used to identify the owners or buyers and sellers, and that the OwnerID is the primary key of the AntiqueOwners table. In other words, all of this "ID" data is used to refer to the owners, buyers, or sellers of antiques, themselves, without having to use the actual names.

Performing a Join

The purpose of these keys is so that data can be related across tables, without having to repeat data in every table--this is the power of relational databases. For example, you can find the names of those who bought a chair without having to list the full name of the buyer in the Antiques table...you can get the name by relating those who bought a chair with the names in the AntiqueOwners table through the use of the OwnerID, which relates the data in the two tables. To find the names of those who bought a chair, use the following query:

SELECT OWNERLASTNAME, OWNERFIRSTNAME

FROM ANTIQUEOWNERS, ANTIQUES

WHERE BUYERID = OWNERID AND ITEM = 'Chair';

Note the following about this query...notice that both tables involved in the relation are listed in the FROM clause of the statement. In the WHERE clause, first notice that the ITEM = 'Chair' part restricts the listing to those who have bought (and in this example, thereby own) a chair. Secondly, notice how the ID columns are related from one table to the next by use of the BUYERID = OWNERID clause. Only where ID's match across tables and the item purchased is a chair (because of the AND), will the names from the AntiqueOwners table be listed. Because the joining condition used an equal sign, this join is called an equijoin. The result of this query is two names: Smith, Bob & Fowler, Sam.

Dot notation refers to prefixing the table names to column names, to avoid ambiguity, as follows:

SELECT ANTIQUEOWNERS.OWNERLASTNAME, ANTIQUEOWNERS.OWNERFIRSTNAME

FROM ANTIQUEOWNERS, ANTIQUES

WHERE ANTIQUES.BUYERID = ANTIQUEOWNERS.OWNERID AND ANTIQUES.ITEM = 'Chair';

As the column names are different in each table, however, this wasn't necessary.

DISTINCT and Eliminating Duplicates

Let's say that you want to list the ID and names of only those people who have sold an antique. Obviously, you want a list where each seller is only listed once--you don't want to know how many antiques a person sold, just the fact that this person sold one (for counts, see the Aggregate Function section below). This means that you will need to tell SQL to eliminate duplicate sales rows, and just list each person only once. To do this, use the DISTINCT keyword.

First, we will need an equijoin to the AntiqueOwners table to get the detail data of the person's LastName and FirstName. However, keep in mind that since the SellerID column in the Antiques table is a foreign key to the AntiqueOwners table, a seller will only be listed if there is a row in the AntiqueOwners table listing the ID and names. We also want to eliminate multiple occurrences of the SellerID in our listing, so we use DISTINCT on the column where the repeats may occur (however, it is generally not necessary to strictly put the Distinct in front of the column name).

To throw in one more twist, we will also want the list alphabetized by LastName, then by FirstName (on a LastName tie). Thus, we will use the ORDER BY clause:

SELECT DISTINCT SELLERID, OWNERLASTNAME, OWNERFIRSTNAME

FROM ANTIQUES, ANTIQUEOWNERS

WHERE SELLERID = OWNERID

ORDER BY OWNERLASTNAME, OWNERFIRSTNAME;

In this example, since everyone has sold an item, we will get a listing of all of the owners, in alphabetical order by last name. For future reference (and in case anyone asks), this type of join is considered to be in the category of inner joins.

Aliases & In/Subqueries

In this section, we will talk about Aliases, In and the use of subqueries, and how these can be used in a 3-table example. First, look at this query which prints the last name of those owners who have placed an order and what the order is, only listing those orders which can be filled (that is, there is a buyer who owns that ordered item):

SELECT OWN.OWNERLASTNAME "Last Name", ORD.ITEMDESIRED "Item Ordered"

FROM ORDERS ORD, ANTIQUEOWNERS OWN

WHERE ORD.OWNERID = OWN.OWNERID

AND ORD.ITEMDESIRED IN

(SELECT ITEM

FROM ANTIQUES);

This gives:

Last Name Item Ordered

--------- ------------

Smith     Table

Smith     Desk

Akins     Chair

Lawson    Mirror

There are several things to note about this query:

1. First, the "Last Name" and "Item Ordered" in the Select line give the column headers on the report.

2. The OWN & ORD are aliases; these are new names for the two tables listed in the FROM clause that are used as prefixes for all dot notations of column names in the query (see above). This eliminates ambiguity, especially in the equijoin WHERE clause where both tables have the column named OwnerID, and the dot notation tells SQL that we are talking about two different OwnerID's from the two different tables.

3. Note that the Orders table is listed first in the FROM clause; this makes sure listing is done off of that table, and the AntiqueOwners table is only used for the detail information (Last Name).

4. Most importantly, the AND in the WHERE clause forces the In Subquery to be invoked ("= ANY" or "= SOME" are two equivalent uses of IN). What this does is, the subquery is performed, returning all of the Items owned from the Antiques table, as there is no WHERE clause. Then, for a row from the Orders table to be listed, the ItemDesired must be in that returned list of Items owned from the Antiques table, thus listing an item only if the order can be filled from another owner. You can think of it this way: the subquery returns a set of Items from which each ItemDesired in the Orders table is compared; the In condition is true only if the ItemDesired is in that returned set from the Antiques table.

5. Also notice, that in this case, that there happened to be an antique available for each one desired...obviously, that won't always be the case. In addition, notice that when the IN, "= ANY", or "= SOME" is used, that these keywords refer to any possible row matches, not column matches...that is, you cannot put multiple columns in the subquery Select clause, in an attempt to match the column in the outer Where clause to one of multiple possible column values in the subquery; only one column can be listed in the subquery, and the possible match comes from multiple row values in that one column, not vice-versa.

Whew! That's enough on the topic of complex SELECT queries for now. Now on to other SQL statements.

Miscellaneous SQL Statements

Aggregate Functions

I will discuss five important aggregate functions: SUM, AVG, MAX, MIN, and COUNT. They are called aggregate functions because they summarize the results of a query, rather than listing all of the rows.

• SUM () gives the total of all the rows, satisfying any conditions, of the given column, where the given column is numeric.

• AVG () gives the average of the given column.

• MAX () gives the largest figure in the given column.

• MIN () gives the smallest figure in the given column.

• COUNT(*) gives the number of rows satisfying the conditions.

Looking at the tables at the top of the document, let's look at three examples:

SELECT SUM(SALARY), AVG(SALARY)

FROM EMPLOYEESTATISTICSTABLE;

This query shows the total of all salaries in the table, and the average salary of all of the entries in the table.

SELECT MIN(BENEFITS)

FROM EMPLOYEESTATISTICSTABLE

WHERE POSITION = 'Manager';

This query gives the smallest figure of the Benefits column, of the employees who are Managers, which is 12500.

SELECT COUNT(*)

FROM EMPLOYEESTATISTICSTABLE

WHERE POSITION = 'Staff';

This query tells you how many employees have Staff status (3).

Views

In SQL, you might (check your DBA) have access to create views for yourself. What a view does is to allow you to assign the results of a query to a new, personal table, that you can use in other queries, where this new table is given the view name in your FROM clause. When you access a view, the query that is defined in your view creation statement is performed (generally), and the results of that query look just like another table in the query that you wrote invoking the view. For example, to create a view:

CREATE VIEW ANTVIEW AS SELECT ITEMDESIRED FROM ORDERS;

Now, write a query using this view as a table, where the table is just a listing of all Items Desired from the Orders table:

SELECT SELLERID

FROM ANTIQUES, ANTVIEW

WHERE ITEMDESIRED = ITEM;

This query shows all SellerID's from the Antiques table where the Item in that table happens to appear in the Antview view, which is just all of the Items Desired in the Orders table. The listing is generated by going through the Antique Items one-by-one until there's a match with the Antview view. Views can be used to restrict database access, as well as, in this case, simplify a complex query.

Creating New Tables

All tables within a database must be created at some point in time...let's see how we would create the Orders table:

CREATE TABLE ORDERS

(OWNERID INTEGER NOT NULL,

ITEMDESIRED CHAR(40) NOT NULL);

This statement gives the table name and tells the DBMS about each column in the table. Please note that this statement uses generic data types, and that the data types might be different, depending on what DBMS you are using. As usual, check local listings. Some common generic data types are:

• Char(x) - A column of characters, where x is a number designating the maximum number of characters allowed (maximum length) in the column.

• Integer - A column of whole numbers, positive or negative.

• Decimal(x, y) - A column of decimal numbers, where x is the maximum length in digits of the decimal numbers in this column, and y is the maximum number of digits allowed after the decimal point. The maximum (4,2) number would be 99.99.

• Date - A date column in a DBMS-specific format.

• Logical - A column that can hold only two values: TRUE or FALSE.

One other note, the NOT NULL means that the column must have a value in each row. If NULL was used, that column may be left empty in a given row.

Altering Tables

Let's add a column to the Antiques table to allow the entry of the price of a given Item (Parentheses optional):

ALTER TABLE ANTIQUES ADD (PRICE DECIMAL(8,2) NULL);

The data for this new column can be updated or inserted as shown later.

Adding Data

To insert rows into a table, do the following:

INSERT INTO ANTIQUES VALUES (21, 01, 'Ottoman', 200.00);

This inserts the data into the table, as a new row, column-by-column, in the pre-defined order. Instead, let's change the order and leave Price blank:

INSERT INTO ANTIQUES (BUYERID, SELLERID, ITEM)

VALUES (01, 21, 'Ottoman');

Deleting Data

Let's delete this new row back out of the database:

DELETE FROM ANTIQUES

WHERE ITEM = 'Ottoman';

But if there is another row that contains 'Ottoman', that row will be deleted also. Let's delete all rows (one, in this case) that contain the specific data we added before:

DELETE FROM ANTIQUES

WHERE ITEM = 'Ottoman' AND BUYERID = 01 AND SELLERID = 21;

Updating Data

Let's update a Price into a row that doesn't have a price listed yet:

UPDATE ANTIQUES SET PRICE = 500.00 WHERE ITEM = 'Chair';

This sets all Chairs' Prices to 500.00. As shown above, more WHERE conditionals, using AND, must be used to limit the updating to more specific rows. Also, additional columns may be set by separating equal statements with commas.
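As a sketch of those two points together (the new price and item name here are assumed purely for illustration), one statement can set several columns at once and use AND to touch only one row:

UPDATE ANTIQUES

SET PRICE = 450.00, ITEM = 'Armchair'

WHERE ITEM = 'Chair' AND BUYERID = 02;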

Miscellaneous Topics

Indexes

Indexes allow a DBMS to access data more quickly (please note: this feature is nonstandard/not available on all systems). The system creates this internal data structure (the index) which causes selection of rows, when the selection is based on indexed columns, to occur faster. This index tells the DBMS where a certain row is in the table given an indexed-column value, much like a book index tells you on what page a given word appears. Let's create an index for the OwnerID in the AntiqueOwners table:

CREATE INDEX OID_IDX ON ANTIQUEOWNERS (OWNERID);

Now on the names:

CREATE INDEX NAME_IDX ON ANTIQUEOWNERS (OWNERLASTNAME, OWNERFIRSTNAME);

To get rid of an index, drop it:

DROP INDEX OID_IDX;

By the way, you can also "drop" a table, as well (careful!--that means that your table is deleted). In the second example, the index is kept on the two columns, aggregated together--strange behavior might occur in this situation...check the manual before performing such an operation.

Some DBMS's do not enforce primary keys; in other words, the uniqueness of a column is not enforced automatically. What that means is, if, for example, I tried to insert another row into the AntiqueOwners table with an OwnerID of 02, some systems will allow me to do that, even though we do not, as that column is supposed to be unique to that table (every row value is supposed to be different). One way to get around that is to create a unique index on the column that we want to be a primary key, to force the system to enforce prohibition of duplicates:

CREATE UNIQUE INDEX OID_IDX ON ANTIQUEOWNERS (OWNERID);

GROUP BY & HAVING

One special use of GROUP BY is to associate an aggregate function (especially COUNT; counting the number of rows in each group) with groups of rows. First, assume that the Antiques table has the Price column, and each row has a value for that column. We want to see the price of the most expensive item bought by each owner. We have to tell SQL to group each owner's purchases, and tell us the maximum purchase price:

SELECT BUYERID, MAX(PRICE)

FROM ANTIQUES

GROUP BY BUYERID;
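Since COUNT was singled out above, here is the matching count-per-group query, listing how many items each buyer purchased:

SELECT BUYERID, COUNT(*)

FROM ANTIQUES

GROUP BY BUYERID;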

Now, say we only want to see the maximum purchase price if the purchase is over $1000, so we use the HAVING clause:

SELECT BUYERID, MAX(PRICE)

FROM ANTIQUES

GROUP BY BUYERID

HAVING MAX(PRICE) > 1000;

More Subqueries

Another common usage of subqueries involves the use of operators to allow a Where condition to include the Select output of a subquery. First, list the buyers who purchased an expensive item (the Price of the item is $100 greater than the average price of all items purchased):

SELECT BUYERID

FROM ANTIQUES

WHERE PRICE >

(SELECT AVG(PRICE) + 100

FROM ANTIQUES);

The subquery calculates the average Price plus $100, and using that figure, a BuyerID is printed for every item costing more than that figure. One could use DISTINCT BUYERID to eliminate duplicates.
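Written out, that duplicate-free variant is:

SELECT DISTINCT BUYERID

FROM ANTIQUES

WHERE PRICE >

(SELECT AVG(PRICE) + 100

FROM ANTIQUES);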

List the Last Names of those in the AntiqueOwners table, ONLY if they have bought an item:

SELECT OWNERLASTNAME

FROM ANTIQUEOWNERS

WHERE OWNERID IN

(SELECT DISTINCT BUYERID

FROM ANTIQUES);

The subquery returns a list of buyers, and the Last Name is printed for an Antique Owner if and only if the Owner's ID appears in the subquery list (sometimes called a candidate list). Note: on some DBMS's, equals can be used instead of IN, but for clarity's sake, since a set is returned from the subquery, IN is the better choice.

For an Update example, we know that the gentleman who bought the bookcase has the wrong First Name in the database...it should be John:

UPDATE ANTIQUEOWNERS

SET OWNERFIRSTNAME = 'John'

WHERE OWNERID =

(SELECT BUYERID

FROM ANTIQUES

WHERE ITEM = 'Bookcase');

First, the subquery finds the BuyerID for the person(s) who bought the Bookcase, then the outer query updates his First Name.

Remember this rule about subqueries: when you have a subquery as part of a WHERE condition, the Select clause in the subquery must have columns that match in number and type to those in the Where clause of the outer query. In other words, if you have "WHERE ColumnName = (SELECT...);", the Select must have only one column in it, to match the ColumnName in the outer Where clause, and they must match in type (both being integers, both being character strings, etc.).

EXISTS & ALL

EXISTS uses a subquery as a condition, where the condition is True if the subquery returns any rows, and False if the subquery does not return any rows; this is a nonintuitive feature with few unique uses. However, if a prospective customer wanted to see the list of Owners only if the shop dealt in Chairs, try:

SELECT OWNERFIRSTNAME, OWNERLASTNAME

FROM ANTIQUEOWNERS

WHERE EXISTS

(SELECT *

FROM ANTIQUES

WHERE ITEM = 'Chair');

If there are any Chairs in the Antiques column, the subquery would return a row or rows, making the EXISTS clause true, causing SQL to list the Antique Owners. If there had been no Chairs, no rows would have been returned by the outside query.

ALL is another unusual feature, as ALL queries can usually be done with different, and possibly simpler methods; let's take a look at an example query:

SELECT BUYERID, ITEM

FROM ANTIQUES

WHERE PRICE >= ALL

(SELECT PRICE

FROM ANTIQUES);

This will return the largest priced item (or more than one item if there is a tie), and its buyer. The subquery returns a list of all Prices in the Antiques table, and the outer query goes through each row of the Antiques table, and if its Price is greater than or equal to every (or ALL) Prices in the list, it is listed, giving the highest priced Item. The reason ">=" must be used is that the highest priced item will be equal to the highest price on the list, because this Item is itself in the Price list.
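As one of those possibly simpler methods (assuming every row has a Price), a MAX subquery gives the same result, ties included:

SELECT BUYERID, ITEM

FROM ANTIQUES

WHERE PRICE =

(SELECT MAX(PRICE)

FROM ANTIQUES);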

UNION & Outer Joins (briefly explained)

There are occasions where you might want to see the results of multiple queries together, combining their output; use UNION. To merge the output of the following two queries, displaying the ID's of all Buyers, plus all those who have an Order placed:

SELECT BUYERID

FROM ANTIQUES

UNION

SELECT OWNERID

FROM ORDERS;

Notice that SQL requires that the Select list (of columns) must match, column-by-column, in data type. In this case BuyerID and OwnerID are of the same data type (integer). Also notice that SQL does automatic duplicate elimination when using UNION (as if they were two "sets"); in single queries, you have to use DISTINCT.

The outer join is used when a join query is "united" with the rows not included in the join, and are especially useful if constant text "flags" are included. First, look at the query:

SELECT OWNERID, 'is in both Orders & Antiques'

FROM ORDERS, ANTIQUES

WHERE OWNERID = BUYERID

UNION

SELECT BUYERID, 'is in Antiques only'

FROM ANTIQUES

WHERE BUYERID NOT IN

(SELECT OWNERID

FROM ORDERS);

The first query does a join to list any owners who are in both tables, and putting a tag line after the ID repeating the quote. The UNION merges this list with the next list. The second list is generated by first listing those ID's not in the Orders table, thus generating a list of ID's excluded from the join query. Then, each row in the Antiques table is scanned, and if the BuyerID is not in this exclusion list, it is listed with its quoted tag. There might be an easier way to make this list, but it's difficult to generate the informational quoted strings of text.

This concept is useful in situations where a primary key is related to a foreign key, but the foreign key value for some primary keys is NULL. For example, in one table, the primary key is a salesperson, and in another table is customers, with their salesperson listed in the same row. However, if a salesperson has no customers, that person's name won't appear in the customer table. The outer join is used if the listing of all salespersons is to be printed, listed with their customers, whether the salesperson has a customer or not--that is, no customer is printed (a logical NULL value) if the salesperson has no customers, but is in the salespersons table. Otherwise, the salesperson will be listed with each customer.
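Note that since SQL-92, most DBMSs can express this directly with LEFT OUTER JOIN syntax instead of a UNION; a sketch using hypothetical SALESPERSONS and CUSTOMERS tables (table and column names assumed for illustration):

SELECT S.SALESPERSONNAME, C.CUSTOMERNAME

FROM SALESPERSONS S

LEFT OUTER JOIN CUSTOMERS C

ON S.SALESPERSONID = C.SALESPERSONID;

Every salesperson is listed; where a salesperson has no customers, CustomerName comes back as NULL.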

Another important related point about Nulls having to do with joins: the order of tables listed in the From clause is very important. The rule states that SQL "adds" the second table to the first; the first table listed has any rows where there is a null on the join column displayed; if the second table has a row with a null on the join column, that row from the table listed second does not get joined, and thus included with the first table's row data. This is another occasion (should you wish that data included in the result) where an outer join is commonly used. The concept of nulls is important, and it may be worth your time to investigate them further.

ENOUGH QUERIES!!! you say?...now on to something completely different...

WinRunner

1) How have you used WinRunner in your project?

a) Yes, I have used WinRunner to create automated scripts for GUI, functional, and regression testing of the AUT.

2) Explain the WinRunner testing process.

a) The WinRunner testing process involves six main stages:

i. Create GUI Map File so that WinRunner can recognize the GUI objects in the application being tested

ii. Create test scripts by recording, programming, or a combination of both. While recording tests, insert checkpoints where you want to check the response of the application being tested.

iii. Debug Test: run tests in Debug mode to make sure they run smoothly

iv. Run Tests: run tests in Verify mode to test your application.

v. View Results: determine the success or failure of the tests.

vi. Report Defects: If a test run fails due to a defect in the application being tested, you can report information about the defect directly from the Test Results window.

3) What is contained in the GUI map?

a) WinRunner stores information it learns about a window or object in a GUI Map. When WinRunner runs a test, it uses the GUI map to locate objects. It reads an object’s description in the GUI map and then looks for an object with the same properties in the application being tested. Each of these objects in the GUI Map file will be having a logical name and a physical description.

b) There are 2 types of GUI Map files.

i. Global GUI Map file: a single GUI Map file for the entire application

ii. GUI Map File per Test: WinRunner automatically creates a GUI Map file for each test created.

4) How does WinRunner recognize objects on the application?

a) WinRunner uses the GUI Map file to recognize objects on the application. When WinRunner runs a test, it uses the GUI map to locate objects. It reads an object’s description in the GUI map and then looks for an object with the same properties in the application being tested.

5) Have you created test scripts and what is contained in the test scripts?

a) Yes, I have created test scripts. A test script contains statements in Mercury Interactive’s Test Script Language (TSL). These statements appear as a test script in a test window. You can then enhance your recorded test script, either by typing in additional TSL functions and programming elements or by using WinRunner’s visual programming tool, the Function Generator.

6) How does WinRunner evaluate test results?

a) Following each test run, WinRunner displays the results in a report. The report details all the major events that occurred during the run, such as checkpoints, error messages, system messages, or user messages. If mismatches are detected at checkpoints during the test run, you can view the expected results and the actual results from the Test Results window.

7) Have you performed debugging of the scripts?

a) Yes, I have performed debugging of scripts. We can debug a script by executing it in Debug mode. We can also debug a script using the Step, Step Into, and Step Out functionalities provided by WinRunner.

8) How do you run your test scripts?

a) We run tests in Verify mode to test the application. Each time WinRunner encounters a checkpoint in the test script, it compares the current data of the application being tested to the expected data captured earlier. If any mismatches are found, WinRunner captures them as actual results.

9) How do you analyze results and report the defects?

a) Following each test run, WinRunner displays the results in a report. The report details all the major events that occurred during the run, such as checkpoints, error messages, system messages, or user messages. If mismatches are detected at checkpoints during the test run, you can view the expected results and the actual results from the Test Results window. If a test run fails due to a defect in the application being tested, you can report information about the defect directly from the Test Results window. This information is sent via e-mail to the quality assurance manager, who tracks the defect until it is fixed.

10) What is the use of TestDirector software?

a) TestDirector is Mercury Interactive’s software test management tool. It helps quality assurance personnel plan and organize the testing process. With TestDirector you can create a database of manual and automated tests, build test cycles, run tests, and report and track defects. You can also create reports and graphs to help review the progress of planning tests, running tests, and tracking defects before a software release.

11) How do you integrate your automated scripts with TestDirector?

a) When you work with WinRunner, you can choose to save your tests directly to your TestDirector database. Alternatively, while creating a test case in TestDirector, you can specify whether the script is automated or manual; if it is automated, TestDirector builds a skeleton script that can later be modified into one used to test the AUT.

12) What are the different modes of recording?

a) There are two types of recording in WinRunner.

i. Context Sensitive recording records the operations you perform on your application by identifying Graphical User Interface (GUI) objects.

ii. Analog recording records keyboard input, mouse clicks, and the precise x- and y-coordinates traveled by the mouse pointer across the screen.

13) What is the purpose of loading WinRunner Add-Ins?

a) Add-ins are used in WinRunner to load functions specific to a particular environment into memory. While creating a script, only the functions in the selected add-ins are listed in the Function Generator, and while executing a script, only functions in the loaded add-ins can be executed; otherwise WinRunner gives an error message saying it does not recognize the function.

14) What are the reasons that WinRunner fails to identify an object on the GUI?

a) WinRunner fails to identify an object in a GUI due to various reasons.

i. The object is not a standard Windows object.

ii. If the browser used is not compatible with the WinRunner version, GUI Map Editor will not be able to learn any of the objects displayed in the browser window.

15) What do you mean by the logical name of the object?

a) An object’s logical name is determined by its class. In most cases, the logical name is the label that appears on an object.

16) If the object does not have a name then what will be the logical name?

a) If the object does not have a name then the logical name could be the attached text.

17) What is the difference between the GUI map and GUI map files?

a) The GUI map is actually the sum of one or more GUI map files. There are two modes for organizing GUI map files.

i. Global GUI Map file: a single GUI Map file for the entire application

ii. GUI Map File per Test: WinRunner automatically creates a GUI Map file for each test created.

b) A GUI Map file is a file that contains the windows and objects learned by WinRunner, each with its logical name and physical description.

18) How do you view the contents of the GUI map?

a) The GUI Map Editor displays the contents of a GUI Map. We can invoke the GUI Map Editor from the Tools menu in WinRunner. It displays the GUI Map files created, along with the windows and objects learned into them, with their logical names and physical descriptions.

19) When you create a GUI map, do you record all the objects or only specific objects?

a) If we are learning a window, WinRunner automatically learns all the objects in the window. Otherwise, we identify only those objects in the window that need to be learned, since those are the only objects we will work with while creating scripts.

20) What is the purpose of set_window command?

a) The set_window command sets the focus to the specified window. We use this command to set the focus to the required window before executing tests on it.

Syntax: set_window ( window [, time ] );

Here window is the logical name of the window, and time is the number of seconds WinRunner waits for the given window to come into focus.
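
For example, a minimal sketch using the sample Flight Reservation application (the window and button names are illustrative):

set_window ("Flight Reservation", 10); # wait up to 10 seconds for the window to be in focus
button_press ("Insert Order"); # subsequent statements act on objects in this window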

21) How do you load GUI map?

a) We can load a GUI Map by using the GUI_load command.

Syntax: GUI_load ( file_name );
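
For example, a minimal sketch (the file path is hypothetical):

GUI_load ("C:\\qa\\maps\\flights.gui"); # load the map before running tests that use its objects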

22) What is the disadvantage of loading the GUI maps through start up scripts?

a) If we are using a single GUI Map file for the entire AUT, the memory used by the GUI Map may be quite high.

b) If any learned object changes, WinRunner will no longer recognize it, since the changed object no longer matches the description in the GUI Map file loaded in memory. We then have to learn the object again, update the GUI Map file, and reload it.

23) How do you unload the GUI map?

a) We can use GUI_close to unload a specific GUI Map file, or we can use the GUI_close_all command to unload all the GUI Map files loaded in memory.

Syntax: GUI_close ( file_name ); or GUI_close_all;

24) What actually happens when you load GUI map?

a) When we load a GUI Map file, the information about the windows and the objects with their logical names and physical description are loaded into memory. So when the WinRunner executes a script on a particular window, it can identify the objects using this information loaded in the memory.

25) What is the purpose of the temp GUI map file?

a) While recording a script, WinRunner learns objects and windows by itself. This information is stored in the temporary GUI Map file. We can specify in the General Options whether this temporary GUI Map file should be loaded each time.

26) What is the extension of gui map file?

a) The extension for a GUI Map file is “.gui”.

27) How do you find an object in a GUI map?

a) The GUI Map Editor provides Find and Show buttons.

i. To locate in the application an object listed in the GUI Map file, select the object and click the Show button. This makes the selected object blink in the application.

ii. To find a particular object in a GUI Map file, click the Find button, which lets you point to the object in the application. If the selected object has been learned into the GUI Map file, it is highlighted there.

28) What different actions are performed by find and show button?

a) Show: to locate in the application an object listed in the GUI Map file, select the object and click the Show button. This makes the selected object blink in the application.

b) Find: to find a particular object in a GUI Map file, click the Find button, which lets you point to the object in the application. If the selected object has been learned into the GUI Map file, it is highlighted there.

29) How do you identify which files are loaded in the GUI map?

a) The GUI Map Editor has a “GUI File” drop-down list displaying all the GUI Map files loaded into memory.

30) How do you modify the logical name or the physical description of the objects in GUI map?

a) You can modify the logical name or the physical description of an object in a GUI map file using the GUI Map Editor.

31) When do you feel you need to modify the logical name?

a) Changing the logical name of an object is useful when the assigned logical name is not sufficiently descriptive or is too long.

32) When it is appropriate to change physical description?

a) Changing the physical description is necessary when the property value of an object changes.

33) How WinRunner handles varying window labels?

a) We can handle varying window labels using regular expressions. WinRunner uses two “hidden” properties in order to use regular expression in an object’s physical description. These properties are regexp_label and regexp_MSW_class.

i. The regexp_label property is used for windows only. It operates “behind the scenes” to insert a regular expression into a window’s label description.

ii. The regexp_MSW_class property inserts a regular expression into an object’s MSW_class. It is obligatory for all types of windows and for the object class object.


34) What is the purpose of regexp_label property and regexp_MSW_class property?

a) The regexp_label property is used for windows only. It operates “behind the scenes” to insert a regular expression into a window’s label description.

b) The regexp_MSW_class property inserts a regular expression into an object’s MSW_class. It is obligatory for all types of windows and for the object class object.

35) How do you suppress a regular expression?

a) We can suppress the regular expression of a window by replacing the regexp_label property with label property.

36) How do you copy and move objects between different GUI map files?

a) We can copy and move objects between different GUI Map files using the GUI Map Editor. The steps to be followed are:

i. Choose Tools > GUI Map Editor to open the GUI Map Editor.

ii. Choose View > GUI Files.

iii. Click Expand in the GUI Map Editor. The dialog box expands to display two GUI map files simultaneously.

iv. View a different GUI map file on each side of the dialog box by clicking the file names in the GUI File lists.

v. In one file, select the objects you want to copy or move. Use the Shift key and/or Control key to select multiple objects. To select all objects in a GUI map file, choose Edit > Select All.

vi. Click Copy or Move.

vii. To restore the GUI Map Editor to its original size, click Collapse.

37) How do you select multiple objects during merging the files?

a) Use the Shift key and/or Control key to select multiple objects. To select all objects in a GUI map file, choose Edit > Select All.

38) How do you clear a GUI map files?

a) We can clear a GUI Map file using the “Clear All” option in the GUI Map Editor.

39) How do you filter the objects in the GUI map?

a) The GUI Map Editor has a Filter option, which provides three ways to filter:

i. Logical name displays only objects with the specified logical name.

ii. Physical description displays only objects matching the specified physical description. Use any substring belonging to the physical description.

iii. Class displays only objects of the specified class, such as all the push buttons.

40) How do you configure GUI map?

a) When WinRunner learns the description of a GUI object, it does not learn all its properties. Instead, it learns the minimum number of properties to provide a unique identification of the object.

b) Many applications also contain custom GUI objects. A custom object is any object not belonging to one of the standard classes used by WinRunner. These objects are therefore assigned to the generic “object” class. When WinRunner records an operation on a custom object, it generates obj_mouse_ statements in the test script.

c) If a custom object is similar to a standard object, you can map it to one of the standard classes. You can also configure the properties WinRunner uses to identify a custom object during Context Sensitive testing. The mapping and the configuration you set are valid only for the current WinRunner session. To make the mapping and the configuration permanent, you must add configuration statements to your startup test script.

41) What is the purpose of GUI map configuration?

a) GUI Map configuration is used to map a custom object to a standard object.

42) How do you make the configuration and mappings permanent?

a) The mapping and the configuration you set are valid only for the current WinRunner session. To make the mapping and the configuration permanent, you must add configuration statements to your startup test script.

43) What is the purpose of GUI spy?

a) Using the GUI Spy, you can view the properties of any GUI object on your desktop. You use the Spy pointer to point to an object, and the GUI Spy displays the properties and their values in the GUI Spy dialog box. You can choose to view all the properties of an object, or only the selected set of properties that WinRunner learns.

44) What is the purpose of obligatory and optional properties of the objects?

a) For each class, WinRunner learns a set of default properties. Each default property is classified “obligatory” or “optional”.

i. An obligatory property is always learned (if it exists).

ii. An optional property is used only if the obligatory properties do not provide unique identification of an object. These optional properties are stored in a list. WinRunner selects the minimum number of properties from this list that are necessary to identify the object. It begins with the first property in the list, and continues, if necessary, to add properties to the description until it obtains unique identification for the object.

45) When the optional properties are learned?

a) An optional property is used only if the obligatory properties do not provide unique identification of an object.

46) What is the purpose of location indicator and index indicator in GUI map configuration?

a) In cases where the obligatory and optional properties do not uniquely identify an object, WinRunner uses a selector to differentiate between them. Two types of selectors are available:

i. A location selector uses the spatial position of objects.

1. The location selector uses the spatial order of objects within the window, from the top left to the bottom right corners, to differentiate among objects with the same description.

ii. An index selector uses a unique number to identify the object in a window.

1. The index selector uses numbers assigned at the time of creation of objects to identify the object in a window. Use this selector if the location of objects with the same description may change within a window.

47) How do you handle custom objects?

a) A custom object is any GUI object not belonging to one of the standard classes used by WinRunner. WinRunner learns such objects under the generic “object” class. WinRunner records operations on custom objects using obj_mouse_ statements.

b) If a custom object is similar to a standard object, you can map it to one of the standard classes. You can also configure the properties WinRunner uses to identify a custom object during Context Sensitive testing.

48) What is the name of custom class in WinRunner and what methods it applies on the custom objects?

a) WinRunner learns custom class objects under the generic “object” class. WinRunner records operations on custom objects using obj_ statements.

49) In a situation when obligatory and optional both the properties cannot uniquely identify an object what method WinRunner applies?

a) In cases where the obligatory and optional properties do not uniquely identify an object, WinRunner uses a selector to differentiate between them. Two types of selectors are available:

i. A location selector uses the spatial position of objects.

ii. An index selector uses a unique number to identify the object in a window.

50) What is the purpose of different record methods 1) Record 2) Pass up 3) As Object 4) Ignore.

a) Record instructs WinRunner to record all operations performed on a GUI object. This is the default record method for all classes. (The only exception is the static class (static text), for which the default is Pass Up.)

b) Pass Up instructs WinRunner to record an operation performed on this class as an operation performed on the element containing the object. Usually this element is a window, and the operation is recorded as win_mouse_click.

c) As Object instructs WinRunner to record all operations performed on a GUI object as though its class were “object” class.

d) Ignore instructs WinRunner to disregard all operations performed on the class.

51) How do you find out which is the start up file in WinRunner?

a) The test script name in the Startup Test box in the Environment tab in the General Options dialog box is the start up file in WinRunner.

52) What are the virtual objects and how do you learn them?

a) Applications may contain bitmaps that look and behave like GUI objects. WinRunner records operations on these bitmaps using win_mouse_click statements. By defining a bitmap as a virtual object, you can instruct WinRunner to treat it like a GUI object such as a push button, when you record and run tests.

b) Using the Virtual Object wizard, you can assign a bitmap to a standard object class, define the coordinates of that object, and assign it a logical name.

To define a virtual object using the Virtual Object wizard:

i. Choose Tools > Virtual Object Wizard. The Virtual Object wizard opens. Click Next.

ii. In the Class list, select a class for the new virtual object. If you select the list class, specify the number of visible rows that are displayed in the window; for a table class, select the number of visible rows and columns. Click Next.

iii. Click Mark Object. Use the crosshairs pointer to select the area of the virtual object. You can use the arrow keys to make precise adjustments to the area you define with the crosshairs. Press Enter or click the right mouse button to display the virtual object’s coordinates in the wizard. If the object marked is visible on the screen, you can click the Highlight button to view it. Click Next.

iv. Assign a logical name to the virtual object. This is the name that appears in the test script when you record on the virtual object. If the object contains text that WinRunner can read, the wizard suggests using this text for the logical name. Otherwise, WinRunner suggests virtual_object, virtual_push_button, virtual_list, etc.

v. You can accept the wizard’s suggestion or type in a different name. WinRunner checks that there are no other objects in the GUI map with the same name before confirming your choice. Click Next.

53) How did you create your test scripts: 1) by recording or 2) by programming?

a) Programming. I have done complete programming only, absolutely no recording.

54) What are the two modes of recording?

a) There are 2 modes of recording in WinRunner

i. Context Sensitive recording records the operations you perform on your application by identifying Graphical User Interface (GUI) objects.

ii. Analog recording records keyboard input, mouse clicks, and the precise x- and y-coordinates traveled by the mouse pointer across the screen.

55) What is a checkpoint and what are different types of checkpoints?

a) Checkpoints allow you to compare the current behavior of the application being tested to its behavior in an earlier version.

You can add four types of checkpoints to your test scripts:

i. GUI checkpoints verify information about GUI objects. For example, you can check that a button is enabled or see which item is selected in a list.

ii. Bitmap checkpoints take a “snapshot” of a window or area of your application and compare this to an image captured in an earlier version.

iii. Text checkpoints read text in GUI objects and in bitmaps and enable you to verify their contents.

iv. Database checkpoints check the contents and the number of rows and columns of a result set, which is based on a query you create on your database.

56) What are data driven tests?

a) When you test your application, you may want to check how it performs the same operations with multiple sets of data. You can create a data-driven test with a loop that runs ten times: each time the loop runs, it is driven by a different set of data. In order for WinRunner to use data to drive the test, you must link the data to the test script which it drives. This is called parameterizing your test. The data is stored in a data table. You can perform these operations manually, or you can use the DataDriver Wizard to parameterize your test and store the data in a data table.

57) What are the synchronization points?

a) Synchronization points enable you to solve anticipated timing problems between the test and your application. For example, if you create a test that opens a database application, you can add a synchronization point that causes the test to wait until the database records are loaded on the screen.

b) For Analog testing, you can also use a synchronization point to ensure that WinRunner repositions a window at a specific location. When you run a test, the mouse cursor travels along exact coordinates. Repositioning the window enables the mouse pointer to make contact with the correct elements in the window.

58) What is parameterizing?

a) In order for WinRunner to use data to drive the test, you must link the data to the test script which it drives. This is called parameterizing your test. The data is stored in a data table.

59) How do you maintain the document information of the test scripts?

a) Before creating a test, you can document information about the test in the General and Description tabs of the Test Properties dialog box. You can enter the name of the test author, the type of functionality tested, a detailed description of the test, and a reference to the relevant functional specifications document.

60) What do you verify with the GUI checkpoint for single property and what command it generates, explain syntax?

a) You can check a single property of a GUI object. For example, you can check whether a button is enabled or disabled or whether an item in a list is selected. To create a GUI checkpoint for a property value, use the Check Property dialog box to add one of the following functions to the test script:

i. button_check_info

ii. scroll_check_info

iii. edit_check_info

iv. static_check_info

v. list_check_info

vi. win_check_info

vii. obj_check_info

Syntax: button_check_info (button, property, property_value );

edit_check_info ( edit, property, property_value );
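
As a hedged illustration, such statements might look like this in a script (the object names, properties, and expected values are illustrative):

button_check_info ("OK", "enabled", 1); # verify the OK button is enabled
edit_check_info ("Name:", "value", "John"); # verify the edit field contains "John"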

61) What do you verify with the GUI checkpoint for object/window and what command it generates, explain syntax?

a) You can create a GUI checkpoint to check a single object in the application being tested. You can either check the object with its default properties or you can specify which properties to check.

b) Creating a GUI Checkpoint using the Default Checks

i. You can create a GUI checkpoint that performs a default check on the property recommended by WinRunner. For example, if you create a GUI checkpoint that checks a push button, the default check verifies that the push button is enabled.

ii. To create a GUI checkpoint using default checks:

1. Choose Create > GUI Checkpoint > For Object/Window, or click the GUI Checkpoint for Object/Window button on the User toolbar. If you are recording in Analog mode, press the CHECK GUI FOR OBJECT/WINDOW softkey in order to avoid extraneous mouse movements. Note that you can press the CHECK GUI FOR OBJECT/WINDOW softkey in Context Sensitive mode as well. The WinRunner window is minimized, the mouse pointer becomes a pointing hand, and a help window opens on the screen.

2. Click an object.

3. WinRunner captures the current value of the property of the GUI object being checked and stores it in the test’s expected results folder. The WinRunner window is restored and a GUI checkpoint is inserted in the test script as an obj_check_gui statement.

Syntax: win_check_gui ( window, checklist, expected_results_file, time );

c) Creating a GUI Checkpoint by Specifying which Properties to Check

d) You can specify which properties to check for an object. For example, if you create a checkpoint that checks a push button, you can choose to verify that it is in focus, instead of enabled.

e) To create a GUI checkpoint by specifying which properties to check:

i. Choose Create > GUI Checkpoint > For Object/Window, or click the GUI Checkpoint for Object/Window button on the User toolbar. If you are recording in Analog mode, press the CHECK GUI FOR OBJECT/WINDOW softkey in order to avoid extraneous mouse movements. Note that you can press the CHECK GUI FOR OBJECT/WINDOW softkey in Context Sensitive mode as well. The WinRunner window is minimized, the mouse pointer becomes a pointing hand, and a help window opens on the screen.

ii. Double-click the object or window. The Check GUI dialog box opens.

iii. Click an object name in the Objects pane. The Properties pane lists all the properties for the selected object.

iv. Select the properties you want to check.

1. To edit the expected value of a property, first select it. Next, either click the Edit Expected Value button, or double-click the value in the Expected Value column to edit it.

2. To add a check in which you specify arguments, first select the property for which you want to specify arguments. Next, either click the Specify Arguments button, or double-click in the Arguments column. Note that if an ellipsis (three dots) appears in the Arguments column, then you must specify arguments for a check on this property. (You do not need to specify arguments if a default argument is specified.) When checking standard objects, you only specify arguments for certain properties of edit and static text objects. You also specify arguments for checks on certain properties of nonstandard objects.

3. To change the viewing options for the properties of an object, use the Show Properties buttons.

4. Click OK to close the Check GUI dialog box. WinRunner captures the GUI information and stores it in the test’s expected results folder. The WinRunner window is restored and a GUI checkpoint is inserted in the test script as an obj_check_gui or a win_check_gui statement.

Syntax: win_check_gui ( window, checklist, expected_results_file, time );

obj_check_gui ( object, checklist, expected_results_file, time );
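
For example, a recorded checkpoint might look like the following sketch (the object, checklist, and results-file names are illustrative):

set_window ("Flight Reservation", 5);
obj_check_gui ("Date of Flight:", "list1.ckl", "gui1", 1); # compare against expected results captured earlier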

62) What do you verify with the GUI checkpoint for multiple objects and what command it generates, explain syntax?

a) To create a GUI checkpoint for two or more objects:

i. Choose Create > GUI Checkpoint > For Multiple Objects or click the GUI Checkpoint for Multiple Objects button on the User toolbar. If you are recording in Analog mode, press the CHECK GUI FOR MULTIPLE OBJECTS softkey in order to avoid extraneous mouse movements. The Create GUI Checkpoint dialog box opens.

ii. Click the Add button. The mouse pointer becomes a pointing hand and a help window opens.

iii. To add an object, click it once. If you click a window title bar or menu bar, a help window prompts you to check all the objects in the window.

iv. The pointing hand remains active. You can continue to choose objects by repeating step 3 above for each object you want to check.

v. Click the right mouse button to stop the selection process and to restore the mouse pointer to its original shape. The Create GUI Checkpoint dialog box reopens.

vi. The Objects pane contains the name of the window and objects included in the GUI checkpoint. To specify which objects to check, click an object name in the Objects pane. The Properties pane lists all the properties of the object. The default properties are selected.

1. To edit the expected value of a property, first select it. Next, either click the Edit Expected Value button, or double-click the value in the Expected Value column to edit it.

2. To add a check in which you specify arguments, first select the property for which you want to specify arguments. Next, either click the Specify Arguments button, or double-click in the Arguments column. Note that if an ellipsis appears in the Arguments column, then you must specify arguments for a check on this property. (You do not need to specify arguments if a default argument is specified.) When checking standard objects, you only specify arguments for certain properties of edit and static text objects. You also specify arguments for checks on certain properties of nonstandard objects.

3. To change the viewing options for the properties of an object, use the Show Properties buttons.

vii. To save the checklist and close the Create GUI Checkpoint dialog box, click OK. WinRunner captures the current property values of the selected GUI objects and stores it in the expected results folder. A win_check_gui statement is inserted in the test script.

Syntax: win_check_gui ( window, checklist, expected_results_file, time );

obj_check_gui ( object, checklist, expected_results_file, time );

63) What information is contained in the checklist file and in which file expected results are stored?

a) The checklist file contains information about the objects and the properties of the object we are verifying.

b) The gui*.chk file contains the expected results, which are stored in the exp folder.

64) What do you verify with the bitmap check point for object/window and what command it generates, explain syntax?

a) You can check an object, a window, or an area of a screen in your application as a bitmap. While creating a test, you indicate what you want to check. WinRunner captures the specified bitmap, stores it in the expected results folder (exp) of the test, and inserts a checkpoint in the test script. When you run the test, WinRunner compares the bitmap currently displayed in the application being tested with the expected bitmap stored earlier. In the event of a mismatch, WinRunner captures the current actual bitmap and generates a difference bitmap. By comparing the three bitmaps (expected, actual, and difference), you can identify the nature of the discrepancy.

b) When working in Context Sensitive mode, you can capture a bitmap of a window, object, or of a specified area of a screen. WinRunner inserts a checkpoint in the test script in the form of either a win_check_bitmap or obj_check_bitmap statement.

c) Note that when you record a test in Analog mode, you should press the CHECK BITMAP OF WINDOW softkey or the CHECK BITMAP OF SCREEN AREA softkey to create a bitmap checkpoint. This prevents WinRunner from recording extraneous mouse movements. If you are programming a test, you can also use the Analog function check_window to check a bitmap.

d) To capture a window or object as a bitmap:

i. Choose Create > Bitmap Checkpoint > For Object/Window or click the Bitmap Checkpoint for Object/Window button on the User toolbar. Alternatively, if you are recording in Analog mode, press the CHECK BITMAP OF OBJECT/WINDOW softkey. The WinRunner window is minimized, the mouse pointer becomes a pointing hand, and a help window opens.

ii. Point to the object or window and click it. WinRunner captures the bitmap and generates a win_check_bitmap or obj_check_bitmap statement in the script. The TSL statement generated for a window bitmap has the following syntax:

win_check_bitmap ( object, bitmap, time );

iii. For an object bitmap, the syntax is:

obj_check_bitmap ( object, bitmap, time );

iv. For example, when you click the title bar of the main window of the Flight Reservation application, the resulting statement might be:

win_check_bitmap ("Flight Reservation", "Img2", 1);

v. However, if you click the Date of Flight box in the same window, the statement might be:

obj_check_bitmap ("Date of Flight:", "Img1", 1);

Syntax: obj_check_bitmap ( object, bitmap, time [, x, y, width, height] );

65) What do you verify with the bitmap checkpoint for screen area and what command it generates, explain syntax?

a) You can define any rectangular area of the screen and capture it as a bitmap for comparison. The area can be any size: it can be part of a single window, or it can intersect several windows. The rectangle is identified by the coordinates of its upper left and lower right corners, relative to the upper left corner of the window in which the area is located. If the area intersects several windows or is part of a window with no title (for example, a popup window), its coordinates are relative to the entire screen (the root window).

b) To capture an area of the screen as a bitmap:

i. Choose Create > Bitmap Checkpoint > For Screen Area or click the Bitmap Checkpoint for Screen Area button. Alternatively, if you are recording in Analog mode, press the CHECK BITMAP OF SCREEN AREA softkey. The WinRunner window is minimized, the mouse pointer becomes a crosshairs pointer, and a help window opens.

ii. Mark the area to be captured: press the left mouse button and drag the mouse pointer until a rectangle encloses the area; then release the mouse button.

iii. Press the right mouse button to complete the operation. WinRunner captures the area and generates a win_check_bitmap statement in your script.

iv. The win_check_bitmap statement for an area of the screen has the following syntax:

win_check_bitmap ( window, bitmap, time, x, y, width, height );

66) What do you verify with the database checkpoint default and what command it generates, explain syntax?

a) By adding runtime database record checkpoints you can compare the information in your application during a test run with the corresponding record in your database. By adding standard database checkpoints to your test scripts, you can check the contents of databases in different versions of your application.

b) When you create database checkpoints, you define a query on your database, and your database checkpoint checks the values contained in the result set. The result set is the set of values retrieved from the results of the query.

c) You can create runtime database record checkpoints in order to compare the values displayed in your application during the test run with the corresponding values in the database. If the comparison does not meet the success criteria you specify for the checkpoint, the checkpoint fails. You can define a successful runtime database record checkpoint as one where one or more matching records were found, exactly one matching record was found, or no matching records were found.

d) You can create standard database checkpoints to compare the current values of the properties of the result set during the test run to the expected values captured during recording or otherwise set before the test run. If the expected results and the current results do not match, the database checkpoint fails. Standard database checkpoints are useful when the expected results can be established before the test run.

Syntax: db_check ( checklist_file, expected_results_file [, max_rows [, parameter_array]] );

e) You can add a runtime database record checkpoint to your test in order to compare information that appears in your application during a test run with the current value(s) in the corresponding record(s) in your database. You add runtime database record checkpoints by running the Runtime Record Checkpoint wizard. When you are finished, the wizard inserts the appropriate db_record_check statement into your script.

Syntax:

db_record_check(ChecklistFileName,SuccessConditions,RecordNumber );

ChecklistFileName A file created by WinRunner and saved in the test's checklist folder. The file contains information about the data to be captured during the test run and its corresponding field in the database. The file is created based on the information entered in the Runtime Record Verification wizard.

SuccessConditions Contains one of the following values:

1. DVR_ONE_OR_MORE_MATCH - The checkpoint passes if one or more matching database records are found.

2. DVR_ONE_MATCH - The checkpoint passes if exactly one matching database record is found.

3. DVR_NO_MATCH - The checkpoint passes if no matching database records are found.

RecordNumber An out parameter returning the number of records in the database.
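
A minimal sketch of the resulting statement (the checklist file name is hypothetical; the wizard generates it for you):

db_record_check ("list1.cvr", DVR_ONE_MATCH, record_num); # pass only if exactly one matching record is found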

67) How do you handle dynamically changing area of the window in the bitmap checkpoints?

a) The “Difference between bitmaps” option in the Run tab of the General Options dialog defines the minimum number of pixels that constitutes a bitmap mismatch.

68) What do you verify with the database check point custom and what command it generates, explain syntax?

a) When you create a custom check on a database, you create a standard database checkpoint in which you can specify which properties to check on a result set.

b) You can create a custom check on a database in order to:

i. check the contents of part or all of the result set

ii. edit the expected results of the contents of the result set

iii. count the rows in the result set

iv. count the columns in the result set

c) You can create a custom check on a database using ODBC, Microsoft Query or Data Junction.

69) What do you verify with the sync point for object/window property and what command it generates, explain syntax?

a) Synchronization compensates for inconsistencies in the performance of your application during a test run. By inserting a synchronization point in your test script, you can instruct WinRunner to suspend the test run and wait for a cue before continuing the test.

b) You can create a synchronization point that instructs WinRunner to wait for a specified object or window to appear. For example, you can tell WinRunner to wait for a window to open before performing an operation within that window, or you may want WinRunner to wait for an object to appear in order to perform an operation on that object.

c) You use the obj_exists function to create an object synchronization point, and you use the win_exists function to create a window synchronization point. These functions have the following syntax:

Syntax:

obj_exists ( object [, time ] );

win_exists ( window [, time ] );
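
For example, a minimal synchronization sketch (the window and button names are illustrative):

if (win_exists ("Print", 30) == E_OK) # wait up to 30 seconds for the window to appear
{
    set_window ("Print", 5);
    button_press ("OK");
}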

70) What do you verify with the sync point for object/window bitmap and what command it generates, explain syntax?

a) You can create a bitmap synchronization point that waits for the bitmap of an object or a window to appear in the application being tested.

b) During a test run, WinRunner suspends test execution until the specified bitmap is redrawn, and then compares the current bitmap with the expected one captured earlier. If the bitmaps match, then WinRunner continues the test.

Syntax:

obj_wait_bitmap ( object, image, time );

win_wait_bitmap ( window, image, time );

71) What do you verify with the sync point for screen area and what command it generates, explain syntax?

a) For screen area verification, we capture the screen area into a bitmap and, during execution, compare the application’s screen area against that bitmap file.

Syntax: obj_wait_bitmap(object, image, time, x, y, width, height);

72) How do you edit checklist file and when do you need to edit the checklist file?

a) WinRunner has an edit checklist file option under the Create menu. Select “Edit GUI Checklist” to modify a GUI checklist file or “Edit Database Checklist” to edit a database checklist file. This brings up a dialog box that gives you the option to select the checklist file to modify, as well as an option to select the scope of the checklist file, whether it is test-specific or shared. Select the checklist file and click OK, which opens a window for editing the properties of the objects.

73) How do you edit the expected value of an object?

a) We can modify the expected value of an object by executing the script in Update mode. We can also manually edit the gui*.chk file under the exp folder, which contains the expected values.

74) How do you modify the expected results of a GUI checkpoint?

a) We can modify the expected results of a GUI checkpoint by running the script containing the checkpoint in Update mode.

75) How do you handle ActiveX and Visual basic objects?

a) WinRunner provides add-ins for ActiveX and Visual Basic objects. When loading WinRunner, select those add-ins; they provide a set of functions for working with ActiveX and VB objects.

76) How do you create ODBC query?

a) We can create an ODBC query using the database checkpoint wizard. It provides an option to create an SQL file that uses an ODBC DSN to connect to the database. The SQL file will contain the connection string and the SQL statement.

77) How do you record a data driven test?

a) We can create data-driven tests using data from a flat file, a data table, or a database.

i. Flat file: we store the data in the required format in the file, access the file using the file-manipulation functions, read the data, and assign it to variables.

ii. Data table: an Excel file. We store test data in these files and manipulate it using the ‘ddt_*’ functions.

iii. Database: we store test data in the database and access it using the ‘db_*’ functions.

78) How do you convert a database file to a text file?

a) You can use Data Junction to create a conversion file which converts a database to a target text file.

79) How do you parameterize database check points?

a) When you create a standard database checkpoint using ODBC (Microsoft Query), you can add parameters to an SQL statement to parameterize the checkpoint. This is useful if you want to create a database checkpoint with a query in which the SQL statement defining your query changes.

80) How do you create parameterize SQL commands?

a) A parameterized query is a query in which at least one of the fields of the WHERE clause is parameterized, i.e., the value of the field is specified by a question mark symbol ( ? ). For example, the following SQL statement is based on a query on the database in the sample Flight Reservation application:

i. SELECT Flights.Departure, Flights.Flight_Number, Flights.Day_Of_Week FROM Flights Flights WHERE (Flights.Departure=?) AND (Flights.Day_Of_Week=?)

SELECT defines the columns to include in the query.

FROM specifies the path of the database.

WHERE (optional) specifies the conditions, or filters to use in the query.

Departure is the parameter that represents the departure point of a flight.

Day_Of_Week is the parameter that represents the day of the week of a flight.

b) When creating a database checkpoint, you insert a db_check statement into your test script. When you parameterize the SQL statement in your checkpoint, the db_check function has a fourth, optional, argument: the parameter_array argument. A statement similar to the following is inserted into your test script:

db_check("list1.cdl", "dbvf1", NO_LIMIT, dbvf1_params);

The parameter_array argument will contain the values to substitute for the parameters in the parameterized checkpoint.
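
A hedged sketch of how the parameter array might be filled before the call, assuming the two parameters from the query above (the values are illustrative):

dbvf1_params[1] = "Denver"; # substituted for the first ? (Departure)
dbvf1_params[2] = "Monday"; # substituted for the second ? (Day_Of_Week)
db_check ("list1.cdl", "dbvf1", NO_LIMIT, dbvf1_params);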

81) Explain the following commands:

a) db_connect

i. to connect to a database

db_connect ( session_name, connection_string );

b) db_execute_query

i. to execute a query

db_execute_query ( session_name, SQL, record_number );

record_number is the out value.

c) db_get_field_value

i. returns the value of a single field in the specified row_index and column in the session_name database session.

db_get_field_value ( session_name, row_index, column );

d) db_get_headers

i. returns the number of column headers in a query and the content of the column headers, concatenated and delimited by tabs.

db_get_headers ( session_name, header_count, header_content );

e) db_get_row

i. returns the content of the row, concatenated and delimited by tabs.

db_get_row ( session_name, row_index, row_content );

f) db_write_records

i. writes the record set into a text file delimited by tabs.

db_write_records ( session_name, output_file [ , headers [ , record_limit ] ] );

g) db_get_last_error

i. returns the last error message of the last ODBC or Data Junction operation in the session_name database session.

db_get_last_error ( session_name, error );

h) db_disconnect

i. disconnects from the database and ends the database session.

db_disconnect ( session_name );

i) db_dj_convert

i. runs the djs_file Data Junction export file. When you run this file, the Data Junction Engine converts data from one spoke (source) to another (target). The optional parameters enable you to override the settings in the Data Junction export file.

db_dj_convert ( djs_file [ , output_file [ , headers [ , record_limit ] ] ] );
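
A minimal end-to-end sketch combining several of these functions (the DSN, query, and session name are hypothetical; this assumes the usual TSL convention that functions return E_OK on success):

rc = db_connect ("session1", "DSN=Flight32");
if (rc == E_OK)
{
    db_execute_query ("session1", "SELECT * FROM Orders", rec_count);
    db_get_row ("session1", 1, row); # content of the first record, tab-delimited
    db_disconnect ("session1");
}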

82) What check points you will use to read and check text on the GUI and explain its syntax?

a) You can use text checkpoints in your test scripts to read and check text in GUI objects and in areas of the screen. While creating a test you point to an object or a window containing text. WinRunner reads the text and writes a TSL statement to the test script. You may then add simple programming elements to your test scripts to verify the contents of the text.

b) You can use a text checkpoint to:

i. Read text from a GUI object or window in your application, using obj_get_text and win_get_text

ii. Search for text in an object or window, using win_find_text and obj_find_text

iii. Move the mouse pointer to text in an object or window, using obj_move_locator_text and win_move_locator_text

iv. Click on text in an object or window, using obj_click_on_text and win_click_on_text

83) Explain Get Text checkpoint from object/window with syntax?

a) We use the obj_get_text ( object, out_text ) function to get the text from an object.

b) We use win_get_text (window, out_text [, x1, y1, x2, y2]) function to get the text from a window.
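
For example (the object and window names are illustrative):

obj_get_text ("Order No:", order_text); # read the text of a single object
win_get_text ("Flight Reservation", all_text); # read all the text in the window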

84) Explain Get Text checkpoint from screen area with syntax?

a) We use win_get_text (window, out_text [, x1, y1, x2, y2]) function to get the text from a window.

85) Explain Get Text checkpoint from selection (web only) with syntax?

a) Returns a text string from an object.

web_obj_get_text (object, table_row, table_column, out_text [, text_before, text_after, index]);

i. object The logical name of the object.

ii. table_row If the object is a table, it specifies the location of the row within a table. The string is preceded by the # character.

iii. table_column If the object is a table, it specifies the location of the column within a table. The string is preceded by the # character.

iv. out_text The output variable that stores the text string.

v. text_before Defines the start of the search area for a particular text string.

vi. text_after Defines the end of the search area for a particular text string.

vii. index The occurrence number to locate. (The default is 1.)

86) Explain Get Text checkpoint web text checkpoint with syntax?

a) We use web_obj_text_exists function for web text checkpoints.

web_obj_text_exists ( object, table_row, table_column, text_to_find [, text_before, text_after] );

a. object The logical name of the object to search.

b. table_row If the object is a table, it specifies the location of the row within the table. The string is preceded by the # character.

c. table_column If the object is a table, it specifies the location of the column within the table. The string is preceded by the # character.

d. text_to_find The string that is searched for.

e. text_before Defines the start of the search area for a particular text string.

f. text_after Defines the end of the search area for a particular text string.

87) Which TSL functions you will use for

a) Searching text on the window

i. find_text ( string, out_coord_array, search_area [, string_def ] );

string The string that is searched for. The string must be complete, contain no spaces, and it must be preceded and followed by a space outside the quotation marks. To specify a literal, case-sensitive string, enclose the string in quotation marks. Alternatively, you can specify the name of a string variable. In this case, the string variable can include a regular expression.

out_coord_array The name of the array that stores the screen coordinates of the text (see explanation below).

search_area The area to search, specified as coordinates x1,y1,x2,y2. These define any two diagonal corners of a rectangle. The interpreter searches for the text in the area defined by the rectangle.

string_def Defines the type of search to perform. If no value is specified, (0 or FALSE, the default), the search is for a single complete word only. When 1, or TRUE, is specified, the search is not restricted to a single, complete word.

b) getting the location of the text string

i. win_find_text ( window, string, result_array [, search_area [, string_def ] ] );

window The logical name of the window to search.

string The text to locate. To specify a literal, case sensitive string, enclose the string in quotation marks. Alternatively, you can specify the name of a string variable. The value of the string variable can include a regular expression. The regular expression should not include an exclamation mark (!), however, which is treated as a literal character. For more information regarding Regular Expressions, refer to the "Using Regular Expressions" chapter in your User's Guide.

result_array The name of the output variable that stores the location of the string as a four-element array.

search_area The region of the object to search, relative to the window. This area is defined as a pair of coordinates, with x1,y1,x2,y2 specifying any two diagonally opposite corners of the rectangular search region. If this parameter is not defined, then the entire window is considered the search area.

string_def Defines how the text search is performed. If no string_def is specified, (0 or FALSE, the default parameter), the interpreter searches for a complete word only. If 1, or TRUE, is specified, the search is not restricted to a single, complete word.

c) Moving the pointer to that text string

i. win_move_locator_text (window, string [ ,search_area [ ,string_def ] ] );

window The logical name of the window.

string The text to locate. To specify a literal, case sensitive string, enclose the string in quotation marks. Alternatively, you can specify the name of a string variable. The value of the string variable can include a regular expression (the regular expression need not begin with an exclamation mark).

search_area The region of the object to search, relative to the window. This area is defined as a pair of coordinates, with x1, y1, x2, y2 specifying any two diagonally opposite corners of the rectangular search region. If this parameter is not defined, then the entire window specified is considered the search area.

string_def Defines how the text search is performed. If no string_def is specified, (0 or FALSE, the default parameter), the interpreter searches for a complete word only. If 1, or TRUE, is specified, the search is not restricted to a single, complete word.

d) Comparing the text

i. compare_text (str1, str2 [, chars1, chars2]);

str1, str2 The two strings to be compared.

chars1 One or more characters in the first string.

chars2 One or more characters in the second string. These characters are substituted for those in chars1.
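
A small sketch (the strings and substituted characters are illustrative):

t1 = "File";
t2 = "file";
if (compare_text (t1, t2, "F", "f")) # treat "F" and "f" as identical
    report_msg ("strings match");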

88) What are the steps of creating a data driven test?

a) The steps involved in data driven testing are:

i. Creating a test

ii. Converting to a data-driven test and preparing a database

iii. Running the test

iv. Analyzing the test results.
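
The loop itself is usually written with the ddt_* functions. A minimal sketch, assuming a data table with a "Name" column and an edit field labeled "Name:" (both illustrative):

table = "default.xls";
rc = ddt_open (table, DDT_MODE_READ);
if (rc != E_OK && rc != E_FILE_OPEN)
    pause ("Cannot open the data table.");
ddt_get_row_count (table, row_count);
for (i = 1; i <= row_count; i++)
{
    ddt_set_row (table, i); # make row i the active row
    edit_set ("Name:", ddt_val (table, "Name")); # drive input from the data table
}
ddt_close (table);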

89) Record a data driven test script using data driver wizard?

a) You can use the DataDriver Wizard to convert your entire script or a part of your script into a data-driven test. For example, your test script may include recorded operations, checkpoints, and other statements that do not need to be repeated for multiple sets of data. You need to parameterize only the portion of your test script that you want to run in a loop with multiple sets of data.

To create a data-driven test:

i. If you want to turn only part of your test script into a data-driven test, first select those lines in the test script.

ii. Choose Tools > DataDriver Wizard.

iii. If you want to turn only part of the test into a data-driven test, click Cancel. Select those lines in the test script and reopen the DataDriver Wizard. If you want to turn the entire test into a data-driven test, click Next.

iv. The Use a new or existing Excel table box displays the name of the Excel file that WinRunner creates, which stores the data for the data-driven test. Accept the default data table for this test, enter a different name for the data table, or use the Browse button to locate the path of an existing data table. By default, the data table is stored in the test folder.

v. In the Assign a name to the variable box, enter a variable name with which to refer to the data table, or accept the default name, “table.”

vi. At the beginning of a data-driven test, the Excel data table you selected is assigned as the value of the table variable. Throughout the script, only the table variable name is used. This makes it easy for you to assign a different data table to the script at a later time without making changes throughout the script.

vii. Choose from among the following options:

1. Add statements to create a data-driven test: Automatically adds statements to run your test in a loop: sets a variable name by which to refer to the data table; adds braces ({ and }), a for statement, and a ddt_get_row_count statement to your test script selection to run it in a loop while it reads from the data table; and adds ddt_open and ddt_close statements to your test script to open and close the data table, which are necessary in order to iterate over the rows in the table. Note that you can also add these statements to your test script manually. If you do not choose this option, you will receive a warning that your data-driven test must contain a loop and statements to open and close your data table.

2. Import data from a database: Imports data from a database. This option adds ddt_update_from_db and ddt_save statements to your test script after the ddt_open statement. Note that in order to import data from a database, either Microsoft Query or Data Junction must be installed on your machine. You can install Microsoft Query from the custom installation of Microsoft Office. Data Junction is not automatically included in your WinRunner package; to purchase it, contact your Mercury Interactive representative. For detailed information on working with Data Junction, refer to the documentation in the Data Junction package.

3. Parameterize the test: Replaces fixed values in selected checkpoints and in recorded statements with parameters, using the ddt_val function, and adds columns with variable values for the parameters to the data table. Line by line: Opens a wizard screen for each line of the selected test script, which enables you to decide whether to parameterize a particular line, and if so, whether to add a new column to the data table or use an existing column when parameterizing data.

4. Automatically: Replaces all data with ddt_val statements and adds new columns to the data table. The first argument of the function is the name of the column in the data table. The replaced data is inserted into the table.

viii. The Test script line to parameterize box displays the line of the test script to parameterize. The highlighted value can be replaced by a parameter. The Argument to be replaced box displays the argument (value) that you can replace with a parameter. You can use the arrows to select a different argument to replace.

Choose whether and how to replace the selected data:

1. Do not replace this data: Does not parameterize this data.

2. An existing column: If parameters already exist in the data table for this test, select an existing parameter from the list.

3. A new column: Creates a new column for this parameter in the data table for this test, and adds the selected data to this column of the data table. The default name for the new parameter is the logical name of the object in the selected TSL statement above. Accept this name or assign a new one.

ix. The final screen of the wizard opens.

1. If you want the data table to open after you close the wizard, select Show data table now.

2. To perform the tasks specified in previous screens and close the wizard, click Finish.

3. To close the wizard without making any changes to the test script, click Cancel.

90) What are the three modes of running the scripts?

a) WinRunner provides three modes in which to run tests—Verify, Debug, and Update. You use each mode during a different phase of the testing process.

i. Verify

1. Use the Verify mode to check your application.

ii. Debug

1. Use the Debug mode to help you identify bugs in a test script.

iii. Update

1. Use the Update mode to update the expected results of a test or to create a new expected results folder.

91) Explain the following TSL functions:

a) ddt_open

i. Creates or opens a data table file so that WinRunner can access it.

Syntax: ddt_open ( data_table_name, mode );

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters. This row is labeled row 0.

mode The mode for opening the data table: DDT_MODE_READ (read-only) or DDT_MODE_READWRITE (read or write).

b) ddt_save

i. Saves the information into a data file.

Syntax: ddt_save ( data_table_name );

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table.

c) ddt_close

i. Closes a data table file.

Syntax: ddt_close ( data_table_name );

data_table_name The name of the data table. The data table is a Microsoft Excel file or a tabbed text file. The first row in the file contains the names of the parameters.

d) ddt_export

i. Exports the information of one data table file into a different data table file.

Syntax: ddt_export ( data_table_name1, data_table_name2 );

data_table_name1 The source data table filename.

data_table_name2 The destination data table filename.

e) ddt_show

i. Shows or hides the table editor of a specified data table.

Syntax: ddt_show (data_table_name [, show_flag]);

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table.

show_flag The value indicating whether the editor should be shown (default=1) or hidden (0).

f) ddt_get_row_count

i. Retrieves the number of rows in a data table.

Syntax: ddt_get_row_count (data_table_name, out_rows_count);

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters.

out_rows_count The output variable that stores the total number of rows in the data table.

g) ddt_next_row

i. Changes the active row in a data table to the next row.

Syntax: ddt_next_row (data_table_name);

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters.

h) ddt_set_row

i. Sets the active row in a data table.

Syntax: ddt_set_row (data_table_name, row);

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters. This row is labeled row 0.

row The new active row in the data table.

i) ddt_set_val

i. Sets a value in the current row of the data table

Syntax: ddt_set_val (data_table_name, parameter, value);

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters. This row is labeled row 0.

parameter The name of the column into which the value will be inserted.

value The value to be written into the table.

j) ddt_set_val_by_row

i. Sets a value in a specified row of the data table.

Syntax: ddt_set_val_by_row (data_table_name, row, parameter, value);

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters. This row is labeled row 0.

row The row number in the table. It can be any existing row or the current row number plus 1, which will add a new row to the data table.

parameter The name of the column into which the value will be inserted.

value The value to be written into the table.

k) ddt_get_current_row

i. Retrieves the active row of a data table.

Syntax: ddt_get_current_row ( data_table_name, out_row );

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters. This row is labeled row 0.

out_row The output variable that stores the active row in the data table.

l) ddt_is_parameter

i. Indicates whether a parameter in a data table is valid.

Syntax: ddt_is_parameter (data_table_name, parameter);

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters.

parameter The parameter name to check in the data table.

m) ddt_get_parameters

i. Returns a list of all parameters in a data table.

Syntax: ddt_get_parameters ( table, params_list, params_num );

table The pathname of the data table.

params_list This out parameter returns the list of all parameters in the data table, separated by tabs.

params_num This out parameter returns the number of parameters in params_list.

n) ddt_val

i. Returns the value of a parameter in the active row in a data table.

Syntax: ddt_val (data_table_name, parameter);

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters.

parameter The name of the parameter in the data table.

o) ddt_val_by_row

i. Returns the value of a parameter in the specified row in a data table.

Syntax: ddt_val_by_row ( data_table_name, row_number, parameter );

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters. This row is labeled row 0.

row_number The number of the row in the data table.

parameter The name of the parameter in the data table.

p) ddt_report_row

i. Reports the active row in a data table to the test results

Syntax: ddt_report_row (data_table_name);

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters. This row is labeled row 0.

q) ddt_update_from_db

i. Imports data from a database into a data table. It is inserted into your test script when you select the Import data from a database option in the DataDriver Wizard. When you run your test, this function updates the data table with data from the database.
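As a quick illustration of how several of these functions combine, here is a minimal sketch that writes a result back into the table (the file and column names are illustrative):

table = "default.xls";

ddt_open(table, DDT_MODE_READWRITE);  # read-write mode is required for writing

ddt_set_row(table, 1);                # make row 1 the active row

ddt_set_val(table, "Result", "pass"); # write into the "Result" column

ddt_save(table);                      # save changes before closing

ddt_close(table);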

92) How do you handle unexpected events and errors?

a) WinRunner uses exception handling to detect an unexpected event when it occurs and act to recover the test run.

WinRunner enables you to handle the following types of exceptions:

Pop-up exceptions: Instruct WinRunner to detect and handle the appearance of a specific window.

TSL exceptions: Instruct WinRunner to detect and handle TSL functions that return a specific error code.

Object exceptions: Instruct WinRunner to detect and handle a change in a property for a specific GUI object.

Web exceptions: When the WebTest add-in is loaded, you can instruct WinRunner to handle unexpected events and errors that occur in your Web site during a test run.

93) How do you handle pop-up exceptions?

a) A pop-up exception handler handles the pop-up messages that come up during the execution of the script in the AUT. To handle this type of exception we make WinRunner learn the window and also specify a handler for the exception. It could be:

i. Default actions: WinRunner clicks the OK or Cancel button in the pop-up window, or presses Enter on the keyboard. To select a default handler, click the appropriate button in the dialog box.

ii. User-defined handler: If you prefer, specify the name of your own handler. Click User Defined Function Name and type in a name in the User Defined Function Name box.

94) How do you handle TSL exceptions?

a) A TSL exception enables you to detect and respond to a specific error code returned during test execution.

b) Suppose you are running a batch test on an unstable version of your application. If your application crashes, you want WinRunner to recover test execution. A TSL exception can instruct WinRunner to recover test execution by exiting the current test, restarting the application, and continuing with the next test in the batch.

c) The handler function is responsible for recovering test execution. When WinRunner detects a specific error code, it calls the handler function. You implement this function to respond to the unexpected error in the way that meets your specific testing needs.

d) Once you have defined the exception, WinRunner activates handling and adds the exception to the list of default TSL exceptions in the Exceptions dialog box. Default TSL exceptions are defined by the XR_EXCP_TSL configuration parameter in the wrun.ini configuration file.

95) How do you handle object exceptions?

a) During testing, unexpected changes can occur to GUI objects in the application you are testing. These changes are often subtle but they can disrupt the test run and distort results.

b) You could use exception handling to detect a change in a property of a GUI object during the test run, and recover test execution by calling a handler function and continuing with the test run.

96) How do you comment your script?

a) We comment a script or line of script by inserting a ‘#’ at the beginning of the line.

97) What is a compiled module?

a) A compiled module is a script containing a library of user-defined functions that you want to call frequently from other tests. When you load a compiled module, its functions are automatically compiled and remain in memory. You can call them directly from within any test.

b) Compiled modules can improve the organization and performance of your tests. Since you debug compiled modules before using them, your tests will require less error-checking. In addition, calling a function that is already compiled is significantly faster than interpreting a function in a test script.

98) What is the difference between a script and a compiled module?

a) A test script is the executable test in WinRunner, while a compiled module is used to store reusable functions. Compiled modules are not executable on their own.

b) WinRunner performs a pre-compilation automatically when it saves a module assigned a property value of “Compiled Module”.

c) By default, modules containing TSL code have a property value of "main". Main modules are called for execution from within other modules. Main modules are dynamically compiled into machine code only when WinRunner recognizes a "call" statement. Example of a call for the "app_init" script:

call cso_init();

call( "C:\\MyAppFolder\\" & "app_init" );

d) Compiled modules are loaded into memory to be referenced from TSL code in any module. Example of a load statement:

reload ("C:\\MyAppFolder\\" & "flt_lib");

or

load ("C:\\MyAppFolder\\" & "flt_lib");

99) Write and explain the various loop commands.

a) A for loop instructs WinRunner to execute one or more statements a specified number of times.

It has the following syntax:

for ( [ expression1 ]; [ expression2 ]; [ expression3 ] )

statement

i. First, expression1 is executed. Next, expression2 is evaluated. If expression2 is true, statement is executed and expression3 is executed. The cycle is repeated as long as expression2 remains true. If expression2 is false, the for statement terminates and execution passes to the first statement immediately following.

ii. For example, the for loop below selects the file UI_TEST from the File Name list in the Open window. It selects this file five times and then stops.

set_window ("Open");

for (i = 0; i < 5; i++)

    list_select_item ("File Name:", "UI_TEST");
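TSL also supports while loops; a minimal sketch of the same selection using while (the counter logic is illustrative):

i = 0;

while (i < 5)

{

    list_select_item ("File Name:", "UI_TEST");

    i++;

}

TestDirector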

4). Can we upload test cases from an excel sheet into TestDirector?

Yes, you can. Go to the Add-Ins menu in TestDirector, find the Excel add-in, and install it on your machine. When you open Excel you will find a new menu option, Export to TestDirector; the rest of the procedure is self-explanatory.

5). Can we export the files from TestDirector to Excel Sheet? If yes then how?

Requirements tab -- Right-click on the main requirement, click Export, and save as a Word, Excel, or other template. This saves all the child requirements as well.

Test Plan tab -- Only individual tests can be exported; no parent-child export is possible. Select a test script, click the Design Steps tab, right-click anywhere on the open window, then click Export and Save As.

Test Lab tab -- Select a child group and click the Execution Grid if it is not already selected. Right-click anywhere; the default save option is Excel, but it can be saved in DOC and other formats. Select the 'all' or 'selected' option.

Defects tab -- Right-click anywhere on the window, export 'all' or 'selected' defects, and save as an Excel sheet or document.

6). I use TestDirector 8.0 SP2 and I want to customize the reports generated. I understood that it is possible to use VB scripts with TestDirector. Can somebody give me an example of a VB script used with TestDirector? Or another solution to customize my report. I used the filters in TestDirector but I need a deeper customization.

This depends a lot on what you are interested in reporting on. You have to combine both SQL and VBScript to extract data from TestDirector to Excel, so it is somewhat difficult to give a general-purpose example.

It is also possible to customize the standard reports offered from the Analysis tab; these are written in XML, if you are familiar with that language.

If you log in to Mercury support you will be able to find a lot of code examples.
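For instance, here is a hedged sketch of pulling data out of TestDirector with VBScript through its OTA COM interface (the server URL, domain, project, and credentials are illustrative, and the sketch assumes the TD client-side OTA library is registered on the machine):

Set tdc = CreateObject("TDApiOle80.TDConnection")

tdc.InitConnectionEx "http://tdserver/tdbin"

tdc.ConnectProjectEx "DEFAULT", "MyProject", "user", "password"

' ...query entities here (for example via tdc.BugFactory) and write the fields to Excel...

tdc.DisconnectProject

tdc.ReleaseConnection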

7). How Many Types tab in TestDirector and explain it

There are 4 tabs available in TestDirector.

1. Requirements -> to track the customer requirements

2. Test Plan -> to design the test cases & to store the test scripts

3. Test Lab -> to execute the test sets & track the results

4. Defects -> to log a defect & to track the logged defects

8). How to map requirements with testcases in TestDirector?

1. In the Requirements tab, select Coverage View.

2. Select a requirement by clicking on a parent, child, or grandchild.

3. On the right-hand side (in the Coverage View window) another window will appear with two tabs: (a) Tests Coverage and (b) Details. The Tests Coverage tab is selected by default; if not, click it.

4. Click the Select Tests button. A new window will appear on the right-hand side showing a list of all tests. You can select any test case you want to map to your requirement.

9). How to use TestDirector in real time project

Once you have finished preparing the test cases:

1. Export the test cases into TestDirector (this takes 8 steps in total).

2. The test cases are loaded into the Test Plan module.

3. Once execution starts, we move the test cases from the Test Plan tab to the Test Lab module.

4. In Test Lab, we execute the test cases and mark each as pass, fail, or incomplete. We generate graphs in Test Lab for the daily report and send it to the onsite team (or wherever you want to deliver it).

5. If you find any defects, raise them in the Defects module, attaching a screen shot to each defect.

10). Difference between WEBINSPECT-QAINSPECT and WINRUNNER/TESTDIRECTOR

QAInspect finds and prioritizes the security vulnerabilities in an entire Web application or in specific usage scenarios during testing, and presents detailed information and remediation advice about each vulnerability. WebInspect ensures the security of your most critical information by identifying known and unknown vulnerabilities within the Web application layer. With WebInspect, auditors, compliance officers, and security experts can perform security assessments on a Web-enabled application or Web service. WebInspect enables users to perform security assessments for any Web application or Web service, including the industry-leading application platforms.

11). How can we add requirements to test cases in TestDirector?

We can use the Add Requirement option.

Two kinds of requirements are available in TD:

1. Parent requirements

2. Child requirements

A parent requirement is the title of a requirement; it covers the high-level functions of the requirements.

A child requirement is a sub-title of a requirement; it covers the low-level functions of the requirement.

12). How many types of tabs in TestDirector?

There are 4 tabs available in TestDirector.

1. Requirements

2. Test Plan

3. Test Lab

4. Defects

We can change the names of the tabs as we like; TestDirector enables us to rename them. But there are only these 4 tabs in TD.

13). How to generate the graphs in TestDirector

To generate graphs in TestDirector's Test Lab module:

1. Analysis

2. Graph

3. Graph Wizard

4. Select the graph type as Summary and click the Next button.

5. Select "show current tests" and click the Next button.

6. Select "Define a new filter" and click the Filter button.

7. Select the test set and click the OK button.

8. Select the Plan: Subject and click the OK button.

9. Select the Plan: Status.

10. Select the test set as the x-axis.

11. Click the Finish button.

14). How can we export multiple test cases from TD in a single go?

Export to where? You didn't specify. Anyway, to export to any of the Office tools:

1. Select the multiple steps / cases you need

2. Right click -> save as and proceed

15). In Quality center, when you email a defect, the subject line for the email is shown as the project name + defect ID. How can we customize this subject line to be the SUMMARY of the defect? (Not for each individual defect but I would like to customize the template so that I can use it for all the defect the I email)

You can always change the subject line by deleting the default subject and typing your own.

16). How to upload test cases from Excel into TestDirector (steps to upload)?

First we have to activate Excel in TestDirector with the help of the add-in page.

After activation, we can see the option 'Export to TestDirector' in Excel under the Tools menu.

If you select 'Export to TestDirector', a pop-up dialog box opens; there are then 8 steps to the process:

1. Enter the URL of TestDirector.

2. Enter the domain name and project name.

3. Enter the user name and password.

4. Select any one of these 3 options: requirements, test cases, or defects.

5. Select a map option: a. Select a map b. Select a new map name c. Create a temporary map.

6. Map the TD fields to the corresponding Excel columns, i.e. whatever fields you mentioned in Excel.

7 & 8. These steps require no input; TD processes the data and shows the 'exported successfully' pages.

These are the steps required to export from Excel into TD.

17). Does anyone know of any integration between TestDirector and Rational Robot? Any ideas on the technical feasibility of such an integration and level of effort would also be interesting

TestDirector is a test management tool from Mercury Interactive.

Rational Robot is a Rational product; it comes with the "TestManager" module for test management.

Integrating TestDirector and Rational Robot is not feasible.

18). Explain the project tree in TestDirector.

Project tree in TD: Planning Tests -> Creating Tests -> Executing Tests -> Tracking Defects.

19). What is Coverage status, what does it do?

Coverage status is the percentage of testing covered at a given time. If you have 100 test cases in a project and you have finished 40 of them, then the coverage status of the project is 40%. Coverage status is used to keep track of project accomplishment so as to meet the final deadline of the deliverables.

20). 1.How many types WinRunner can learn the objects?

2. Use of filters in TestDirector?

3. Difference between data validation and data integrity?

4. How many types of reports can be generated using TestDirector?

Difference between data validation and data integrity: in data validation we check whether the input/output data is valid in terms of its length, data type, etc.

In data integrity we check that the database contains only accurate data. We implement data integrity with the help of integrity constraints, so in data integrity we check whether the integrity constraints are implemented correctly or not.

Use Of Filters in TestDirector:

Limits the data displayed according to the criteria you specify. For example, you could create a filter that displays only those tests associated with the specified tester and subject.

Each filter contains multiple filter conditions. These are expressions that are applied to a test plan tree or to the fields in a grid. When the filter is applied, only those records that meet the filter conditions are displayed.

You create and edit filters with the Filter dialog box. This opens when you select the Filter command. Once you have created a filter, you can save it for future use.

21). How to configure an Informatics workflow in TestDirector

Are you trying to set the test execution sequence?

22). How do we generate test cases through TestDirector?

In the Test Plan tab we add design steps; in the design grid we create a parent-child tree.

Ex. Database Operation (Test name)

1). Delete an order

Description: click delete button to delete the order

Expected result: order is deleted

Pass/fail:

2). Update an order

3). Create an order

23). What is the difference between Master test plan and test plan?

The master test plan is the document showing the planning for the whole project, i.e. all phases of the project, whereas the test plan is the document required only for the testing team.

24). What is the main purpose of storing requirements in TestDirector?

In TestDirector (Requirements tab) we store our project requirement documents according to the modules or functionality of the application. This helps us make sure that all requirements are covered when we trace developed test cases/test scripts back to the requirements.

This also helps the QA manager review to what extent the requirements are covered.

25). What are the 3 views and what is the purpose of each view?

The 3 views of requirement are:

1) Document View: a tabulated view.

2) Coverage View: establishes a relationship between requirements and the tests associated with them, along with their execution status. Mostly the requirements are written in this view.

3) Coverage Analysis View: shows a chart of requirements associated with tests, and the execution status of those tests.

26). How many types of reports can be generated using TestDirector?

Reports in TestDirector display information about test requirements, the test plan, test runs, and defect tracking. Reports can be generated from each TestDirector module using the default settings, or you can customize them. When customizing a report, you can apply filters and sort conditions, and determine the layout of the fields in the report. You can further customize the report by adding sub-reports. You can save the settings of your reports as favorite views and reload them as needed.

Reports available in Requirements Module:

Standard Requirements:

Tabular

Requirements with Coverage tests

Requirements with coverage tests and steps

Requirements with associated defects

Reports available in Test plan module:

Standard test planning

Subject tree

Tests with design steps

Tests with covered requirements

Tests with associated defects

Reports available in test Lab module:

Current test set

Cross test set

Cross test set with tests

Current test set with failed test runs

Cross test set with failed test runs

Reports available in Defects module:

Standard defects

Tabular defects

Defects with associated tests and runs

Fixed or rejected defects

Fixed or rejected defects detected by current user

Opened defects and assigned to current user

27). What does Test Grid contains?

The Test Grid displays all the tests in a TestDirector project. 

The Test Grid contains the following key elements:

Test Grid toolbar, with buttons for commands commonly used when creating and modifying the Test Grid.

Grid filter, displaying the filter that is currently applied to a column.

Description tab, displaying a description of the selected test in the Test Grid.

History tab, displaying the changes made to a test. For each change, the grid displays the field name, date of the change, name of the person who made the change, and the new value.

28). How will you generate the defect ID in TestDirector? Is it generated automatically or not?

When you add a new defect and submit it, TestDirector generates a new ID for that defect.

The defect ID is generated automatically after we click the Submit button.

29). Difference between TD 8.0(TestDirector) and QC 8.0 (Quality Center).

The following table highlights the main differences between TestDirector 8.0 and Quality Center 8.0:

Subject | TestDirector 8.0 | Quality Center 8.0

Technology | C++, IIS, COM | Back end is Java based; runs on application servers

Operating Systems | Microsoft Windows | Microsoft Windows, Red Hat Linux, Solaris

Clustering | Single server only | Full clustering support

Database Connectivity | Requires database client installation; ADO interface | Does not require database client installation; direct access to a database server using a JDBC type 4 driver

Repository | Domain repository (TD_Dir) | Repository divided into two subdirectories: QC directory for default and user-defined domains; SA directory for Site Administrator data

Virtual Directory | Virtual directory name is tdbin | Quality Center server virtual directory name is qcbin; Site Administrator server virtual directory name is sabin

Supported Databases | Microsoft Access, Microsoft SQL, Oracle, Sybase | Microsoft SQL, Oracle

Site Administrator Data (domains, projects, and users) | Data stored in the doms.mdb file | Data stored in the Site Administrator schema on a database server

Common Settings | Data stored in the file system | Data stored in the database

User Authentication | Windows authentication | LDAP authentication

30). How do you ensure that there is no duplication of bugs in TestDirector?

In the defect-tracking window, at the top we can find the Find Similar Defects icon. If we click it after writing our defect, it tells us whether any tester has already added a similar defect; if not, we can add ours.

31). How you integrated your automated scripts from TestDirector?

When you work with WinRunner, you can choose to save your tests directly to your TestDirector database. Also, while creating a test case in TestDirector, we can specify whether the script is automated or manual; if it is an automated script, TestDirector builds a skeleton for the script that can later be modified into one that tests the AUT.

QTP (a)

What are the features & benefits of QuickTest Pro (QTP)?

It operates stand-alone, or integrated into Mercury Business Process Testing and Mercury Quality Center. It introduces next-generation "zero-configuration" keyword-driven testing technology in QuickTest Professional 8.0, allowing for fast test creation and easier maintenance.

1. What are the differences between QTP 6.5 and 8.2? What are the extra features in 8.2?

The Multimedia add-in is available in QTP 6.5 but not in QTP 8.2. Parameterization is an extra feature in QTP 8.2 compared to QTP 6.5.

3. How to handle the exceptions using recovery scenario manager in QTP?

There are 4 trigger events for which a recovery scenario can be activated: a pop-up window appears in an opened application during the test run; a property of an object changes its state or value; a step in the test does not run successfully (test run error); and an open application fails during the test run (application crash).

4. What is the use of a text output value in QTP?

Output values enable you to view the values that the application takes during run time. When parameterized, the values change for each iteration. Thus by creating output values, we can capture the values an object takes at run time.

5. How to use the object spy in QTP 8.0?

To view the run-time object and test object properties and methods of an object.

6. What is the extension of the object repository files in QTP?

There are two types of object repository: the shared repository (extension .tsr) and the per-action repository (extension .mtr).

7. Explain the concept of the object repository & how QTP recognizes objects.

QTP 8.2 ships with the QTP Plus setup, which provides the Object Repository Merge Utility; it enables the user to merge object repository files into a single object repository file. QTP recognizes an object using its properties.

8. What are the properties you would use for identifying a browser & page when using descriptive programming?

For the browser and the web page we use the title and html id properties to identify them.

9. What scripting language do you use when working with QTP?

QTP uses VBScript as its scripting language.

10. Give me an example where you have used a COM interface In your QTP project?

A COM interface appears in the scenario of a front end and a back end. For example, if you are using Oracle as the back end and VB (or any other language) as the front end, then for better compatibility we go for an interface, of which COM is one.

11. Few basic questions on commonly used Excel VBA functions.

Common functions include creating sheets, assigning values to cells, coloring cells, auto-fitting cells, setting navigation from a link in one cell to another, and saving.

12. Explain the keyword CreateObject with an example.

CreateObject: creates and returns a reference to an Automation object. Example:

Set ExcelSheet = CreateObject("Excel.Sheet")

13. Explain in brief about the QTP Automation Object Model.

The Automation Object Model lets an external program or script control QTP itself, for example to pre-configure test settings before executing the QTP test.
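A minimal AOM sketch (the test path is illustrative): an external VBScript launches QTP, opens a test read-only, runs it, and closes QTP.

Set qtApp = CreateObject("QuickTest.Application")

qtApp.Launch                        ' start QuickTest

qtApp.Visible = True

qtApp.Open "C:\Tests\MyTest", True  ' open an existing test in read-only mode

qtApp.Test.Run                      ' run the open test

qtApp.Quit

Set qtApp = Nothing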

14. How to handle Dynamic Object in QTP?

Using the GetROProperty method we can handle run-time objects.

15. Where can I find the Run-time data table?

The Test Results window shows the run-time data table: a table-shaped icon displays the run-time Data Table, a table that shows the values used to run a test containing Data Table parameters, or the Data Table output values retrieved from the test during execution.

16. How to do Data-Driving in QTP?

By parameterizing the test script, as in the sketch below.
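A minimal data-driving sketch iterating over the Global sheet (the "Username" column and the Login dialog are illustrative, in the style of the sample Flight application):

rowCount = DataTable.GetSheet("Global").GetRowCount

For i = 1 To rowCount

    DataTable.GetSheet("Global").SetCurrentRow i

    ' use the current row's "Username" value in a step

    Dialog("Login").WinEdit("Agent Name:").Set DataTable("Username", dtGlobalSheet)

Next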

17. What are the differences between call to Action and Copy Action?

When you insert a call to an action, it is read-only in the calling test; it can be modified only in the original test. Whereas with a copied action, you can make changes to the copy, and your changes will not affect the original action from which it was created.

18. Discuss QTP Environment.

The QuickTest Professional environment uses the graphical interface and Active Screen technologies: a testing process for creating test scripts, relating manual test requirements to automated verification features, and data-driving tests to use several sets of data.

19. Explain the concept of how QTP identifies object.

During recording QTP looks at the objects and stores them as test objects. For each object QTP learns a set of default properties called mandatory properties, and checks whether these properties are enough to uniquely identify the object; if not, it looks at further properties.

20. Differentiate the two Object Repository Types of QTP.

In QTP there are 2 object repository types: 1. Shared Object Repository, 2. Per-Action Object Repository. By default it is per-action mode. If we want to maintain the object repository for each action separately, we select per-action mode.

21. What are the differences are and best practical application of each.

Per-action: for each action, one object repository is created. Shared: one object repository is used by the entire application.

22. Explain the differences between a Shared repository and a Per-Action repository.

In a shared repository, one object can be used by more than one action; in a per-action repository, objects are stored separately for each action and are not shared.

23. Have you ever written a compiled module? If yes, tell me about some of the functions that you wrote.

I used functions for capturing dynamic data during run time, and functions for capturing the desktop, browser, and pages.

24. What projects have you used WinRunner on? Tell me about some of the challenges that arose and how you handled them.

WinRunner sometimes fails to identify GUI objects: if there is a non-standard window or object, WinRunner cannot recognize it. We use the GUI Spy to handle such situations.

25. Can you do more than just capture and playback?

Yes, you can do more than capture/playback. Descriptive programming is the answer: we can write scripts without recording, and they will still work fine.

26. How long have you used the product?

Depending upon the license.

27. How to do the scripting. Is there any inbuilt function in QTP?

Yes, there is an in-built facility called the Step Generator (Insert > Step > Step Generator, or F7), which generates script steps as you enter the appropriate operations.

28. What is the difference between a checkpoint and an output value?

A checkpoint compares a current value of a property against its expected value. An output value is a value retrieved during the run session and entered into the run-time table or data table; subsequently it can be used as an input value in your test.

29. If we use batch testing, the result is shown for the last action only. How can I get the result for every action?

Click the Expand icon in the tree view to view the result of every action.

30. How can exception handling be done using QTP?

The Recovery Scenario Manager provides a wizard that guides you through defining a recovery scenario. A recovery scenario has three parts: 1. Triggered event, 2. Recovery steps, 3. Post-recovery test run.

31. How do you test a Siebel application using QTP?

In the SWE section of the configuration you need to add AutomationEnable = TRUE, and at the same time you need to use SWECmd=AutoOn in the URL.

32. How many types of Actions are there in QTP?

QTP supports three types of actions: 1) non-reusable actions, 2) reusable actions, 3) external actions.

33. How do you data drive an external spreadsheet?

Import from an external spreadsheet file by selecting Import and then From File, which imports a tabbed text file or a single sheet from an existing Microsoft Excel file into the table. The sheet you import replaces all data in the currently selected sheet.

34. I want to open a Notepad window without recording a test, and I do not want to use the SystemUtil.Run command. How do I do this?

Another alternative to open Notepad is to use the Shell object. Check out the following example:

Dim a

Set a = CreateObject("Wscript.Shell")

a.Run "notepad.exe"

35. How many recording modes are there in QTP? Describe each type with an example of where we use them.

There are 3 recording modes in QTP: 1. Normal mode, which records objects through the object repository; 2. Analog mode, which records exact mouse and keyboard movements (e.g. for drawing or signatures); 3. Low-level recording mode, which records at the screen-coordinate level when QTP does not recognize the objects.

36. How can we build a framework in QTP?

Depending upon the project and client requirements.

37. Which features of QTP would you like to improve? How would you go about implementing it?

We are not implementing any concept in QTP.

38. Explain how you would design the driver code for a keyword-driven test script.

The test script is prepared in QTP, the keywords are prepared in an Excel sheet, and the object descriptions are prepared in Notepad; a sketch of such a driver follows.
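A minimal keyword-driver sketch under these assumptions: a data table sheet named "Keywords" with an "Action" column, and user-defined functions DoLogin and DoLogout (all of these names are hypothetical):

For i = 1 To DataTable.GetSheet("Keywords").GetRowCount

    DataTable.GetSheet("Keywords").SetCurrentRow i

    keyword = DataTable("Action", "Keywords")

    Select Case keyword

        Case "Login"

            DoLogin     ' hypothetical user-defined function

        Case "Logout"

            DoLogout    ' hypothetical user-defined function

    End Select

Next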

39. The file extension of Shared object repository.

The shared object repository has the .tsr extension.

40. How to handle java tree in QTP?

First of all we need to load the Java add-in to handle a Java tree. In Tools we have the Object Identification dialog with a drop-down list of environments; select the Java option, then select the tree object class and add the properties needed to recognize it.

41. How to fetch test data from Database by using QTP?

In order to fetch test data from a database we have to create an ADODB connection object to connect to the database. The syntax is: Set conn = CreateObject("ADODB.Connection").

42. What is the procedure to test flash application using QTP?

Using the Multimedia add-in support.

43. If an error occurs during the execution of a QTP script, how can we handle it?

Using the Recovery Scenario Manager.

44. How to merge the object repository files.

Using the Object Repository Merge Utility, which is available with the QTP Plus tools.

45. Can we update the database through QTP?

Yes. We can update.

46. How can we pass parameters into actions?

Using Input and output parameters.

47. How can you write a script without using a GUI in QTP?

Without an object repository, the tester needs to write descriptive-programming tests, directly assigning property values and calling methods.
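A minimal descriptive-programming sketch (the property values are illustrative): the step runs with no object repository entry because the description is supplied inline.

Browser("title:=Mercury Tours").Page("title:=Mercury Tours").WebEdit("name:=userName").Set "tester"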

48. How to load the *.vbs or test generating script in a new machine?

By using the Test > Settings > Resources > Libraries option.

49. How did you add a run-time parameter to a datasheet?

Using the data table sheet object.
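For example, a run-time parameter (column) can be added to the Global sheet like this (the column name and value are illustrative):

DataTable.GetSheet("Global").AddParameter "Status", "pass"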

50. What is descriptive programming?

Executing the test script without an object repository.

51. How do you write scripts in QTP?

Using VBScript.

52. How do you instruct QTP to display errors and their descriptions in the test results, instead of halting execution by throwing an error in the middle of the run (for example, Object not found)?

By using "On Error Resume Next" together with the VBScript Err object, whose value can then be reported to the results window, as sketched below.
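A minimal sketch of this pattern (the object names are illustrative):

On Error Resume Next

Browser("B").Page("P").WebButton("Missing").Click   ' a step that may fail

If Err.Number <> 0 Then

    Reporter.ReportEvent micFail, "Click step", "Error: " & Err.Description

    Err.Clear

End If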

53. What is the descriptive programming? What is the use of descriptive programming?

Descriptive programming executes the test script without an object repository; it can also make test execution faster.

54. How will you test a stapler?

Using user acceptance testing.

55. Does QTP support batch tests?

Yes, it is supported by using the Test Batch Runner.

56. Describe the last project scenario and generate test cases for it.

Based on the last project functionality we write test cases for it.

57. If there are a lot of bugs to be fixed, which one would you resolve first?

As a tester, we don't fix bugs; we only find them. As a developer, bugs are to be fixed based on the priority and severity of the bugs: sometimes the severity is low, but if the bug attaches to major functionality, the priority to fix it will be higher.

58. What would be your strategy to fix bugs in an unknown piece of code?

This is basically R&D work, i.e. a verification process: before implementing some logic in a fix, review the piece of code in question so that any errors in that logic can be found and cleared.

59. Define Virtual Object?

Virtual objects enable you to record and run tests on objects that are not normally recognized by QuickTest. We can teach QuickTest to recognize any area of your application as an object by defining it as a virtual object. Sometimes QTP may not recognize certain objects; in such situations, using a new virtual object, we can convert custom objects into standard objects.

60. What are Main panes available in QTP Test Browser?

Test pane - contains the Tree View and Expert View tabs.

Test Details pane - contains the Active Screen.

Debug Viewer pane - assists you in debugging your test; it contains the Watch Expressions, Variables, and Command tabs. (The Debug Viewer pane is not displayed when you open QuickTest for the first time.)

61. On what occasions do we specify the Global sheet and the action sheet?

We store data in the Global tab when we want it to be available to all actions in our test, for example, to pass parameters from one action to another.

62. Difference between Text and Text Area checkpoints?

Text checkpoint - enables you to check that text is displayed in a screen, window, or Web page, according to specified criteria. It is supported for all environments.

Text Area checkpoint - enables you to check that a text string appears within a defined area in a Windows application, according to specified criteria. It is supported for standard Windows, Visual Basic, and ActiveX environments.

63. What are the different types of exceptions?

Four types of Exceptions are there 1). Pop-up exception 2). Object State exception 3). Test Run error exceptions 4). Application Crash exceptions.

64. What are the key elements available in Test Results Window?

Test Results File bar, Test Results bar, Test result tree, Test result details, Status.

65. With what extension you can save the list of tests in a file to run in Test Batch Runner?

.MTB

66. What are Main panes available in QTP Test Browser?

Test pane (Tree & Expert views), Test Details pane (Active Screen), Data Table, Debug Viewer pane.

67. How many tabs are available to view your test in a Test pane and what are they?

Two: the Tree View and the Expert View.

68. What are the 3 mains stages involved in Testing with QTP?

Creating Tests, Running Tests, Analyzing Tests.

69. Write a function to capture pop-ups.

Here are the steps to handle a pop-up exception:

1. Open the Recovery Scenario Manager.

2. Press the New Scenario button.

3. Click Next.

4. Select the pop-up exception (window trigger).

5. Select the pop-up window that we want to handle (capture) by clicking the spy button.

6. Press Next.

7. Select the desired recovery operation, such as a keyboard or mouse operation, and press Next.

8. Select the specific option, e.g. click Default or press Enter.

9. Click Next and uncheck "Add another recovery operation".

10. Click Next and select "Proceed to next step".

11. Click Next and give the scenario a name and description.

12. Click Next, select "Add scenario to current test" and "Add scenario to default settings", then click Finish. Afterwards save the scenario.

70. What is meant by hot keys?

A hot key is a key or a combination of keys on a computer keyboard that performs a task when pressed. The specific task performed by a particular hot key varies by operating system or application; however, there are commonly used hot keys.

71. For a triangle (sum of two sides is greater than or equal to the third side), what is the minimal number of test cases required.

Generally, we calculate the number of test cases depending on the particular module and its complexity. Minimum number of tests = (number of outputs) x 1.6 (an approximate calculation).

72. What are the flaws in the waterfall model, and how do you overcome them?

Since testing comes at the last stage, there are huge chances of defect multiplication: defects migrate through every stage, human resources are wasted, and time delays are introduced. This is overcome by models that bring testing in earlier, such as the V-model or iterative models.

73. How do you test a web link which changes dynamically?

This can be tested with automated test tools such as Rational Robot and WinRunner.

74. What is system testing, and what are the different types of tests you perform in system testing?

System testing is a type of black box testing, i.e. testing the whole application; it is usually done after integration testing. Functionality, regression, and performance testing come under system testing.

75. How do we know about the build we are going to test? Where do you see this?

In the test plan we have all the details about who should test which tests in a team, as assigned by the team leader. According to that, the entire group does its testing.

76. What did you do as a team leader?

The roles of a lead: 1) before the project gets started, conduct a team meeting and discuss briefly the upcoming project; 2) distribute the work among the team members and let them know which part of the application they are going to test; 3)

77. What test you perform mostly? Regression or retesting in your testing process?

Retesting is the repeated execution of a test case which previously resulted in a fault, with the aim of verifying that the fault has been fixed. Regression testing is the renewed testing of an already tested program or part after modification, with the aim of showing that the modification has not introduced new faults.

78. Can we recognize the application in WinRunner without using the GUI Map Editor?

Without using the GUI Map Editor, we can recognize application objects using descriptive programming.

QTP (b)

1). How to create the dynamic object repository in QTP?

Property values of objects in an application may change dynamically each time the application opens, or based on certain conditions. To make the test object property values match the run-time object's property values, we can modify the test object properties manually while designing the test or component, or use SetTOProperty statements during the run session.
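A minimal sketch of such a SetTOProperty adjustment (the object and property names are illustrative, in the style of the sample Flight application):

' make the test object's description match the run-time object before the next step

Dialog("Login").WinEdit("Agent Name:").SetTOProperty "attached text", "User Name:"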

2). What is the difference between the global sheet and an action sheet?

A test is comprised of one or more actions. The Global sheet can be accessed by all the actions in the test. An action sheet is local to a particular action and is named after that action.

The default behavior of the Global sheet is that the test iterates over all its rows: if there are 10 rows of data in the Global sheet, the test iterates 10 times.

The default behavior of an action sheet is to run only the first iteration, no matter how many rows are available. This default behavior can be modified by right-clicking on the action and modifying the action's properties.

3). How to pass parameters from one action to another action.

You can store the variable you want to pass as an environment variable in one action. To access the same variable in another action, read that environment variable back in the second action, store it in some variable, and use it, as in the sketch below.
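For example (the variable name is illustrative):

' in the first action: store the value

Environment.Value("OrderNo") = "42"

' in the second action: read it back into a local variable

orderNo = Environment.Value("OrderNo")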

4). How to perform cross platform testing and cross browser testing using QTP? Explain giving some example?

Cross Platform Testing:

There is a provision for getting the operating system in QTP by using the built-in Environment object, e.g. Platform = Environment("OS"). Then, based on the platform, you call the actions that you recorded on that particular platform.

Cross Browser Testing:

First get the type of browser you are using, e.g. Browser("Core Values").GetROProperty("version").

This will give you, say, Internet Explorer 6 or Netscape 5. Based on this value, you call the actions that are relevant to that browser.

5). Under what conditions do we go for reusable scripts, and how do we create one and call it from another script?

When we want to call an action from another action, we set the called action as reusable; only then can it be reused elsewhere. Otherwise you cannot call that action.

Ex: Login

We can record the script for one action and set it in the action properties as reusable. Whenever you need the login script you can call that reusable action.

6). Is it possible to test a web application (java) with WinRunner?

Otherwise is it possible to check with QTP?

Which one is best?

We find some difficulties when we use WinRunner to test Java-based web applications, but at the same time we do not find any problem using QTP to test them.

These are some of the technologies supported by QTP: XML, HTTP, WSDL, SAP, J2EE, .NET.

So what I would suggest is to go for QTP to test a Java-based web application.

I have heard that a Java web application can be tested by WinRunner, but it is not flexible; it is recommended to use QTP for more reliability.

7). Can we mask the code in a .vbs file so that it is not viewable to others?

A .vbs file is plain text, so it is visible in Notepad, not only in QTP. We can add .vbs files to a QTP test via the Test menu > Settings > Resources tab.

You can also try to open the .vbs file in QTP; if it opens, you have your answer.

8). Is there any function to double click a particular row in a web table?

There is a method named Activate; use that method. The required argument is the row number.

9). What is the use of command tab in Debug viewer? Can we execute any user-defined queries?

Step Into: Choose Debug > Step Into, click the Step Into button, or press F11 to run only the current line of the active test or component. If the current line calls another action or a function, the called action/function is displayed in the QuickTest window, and the test or component pauses at the first line of the called action/function.

Step Out: Choose Debug > Step Out, click the Step Out button, or press SHIFT+F11, only after using Step Into to enter an action or a user-defined function. Step Out runs to the end of the called action or user-defined function, then returns to the calling action and pauses the run session.

Step Over: Choose Debug > Step Over, click the Step Over button, or press F10 to run only the current step in the active test or component. When the current step calls another action or a user-defined function, the called action or function is executed in its entirety, but the called action script is not displayed in the QuickTest window.

As for the Command tab itself: it lets you execute individual lines of script while the run session is paused, so yes, you can run user-defined statements there.

10). What is a database checkpoint?

A database checkpoint helps to check that the database used is correct.

E.g. in Flight Reservation: right-click on the flight information table and choose to add a database checkpoint; you get the checkpoint properties dialog. There, select the database icon (which looks like an Excel sheet) and click OK; the checkpoint is added. If you want to add parameterization, click on the icon in the value cell and add parameters; the data table will be highlighted. Add the fields you want to check and run the test. If it is successful, the database checkpoint is correct.

11). What is the use of a Function and a Sub in QTP?

A Function and a Sub are similar; the only difference is that a Sub does not return anything to the caller, while a Function returns a result.

We can use a function wherever we need it; we can do parameterization and also reduce the lines of code, etc.
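A minimal sketch of the difference (names and values are illustrative):

Function AddNums(a, b)

    AddNums = a + b        ' a Function returns a value to the caller

End Function

Sub ShowSum(a, b)

    MsgBox a + b           ' a Sub performs an action but returns nothing

End Sub

total = AddNums(2, 3)      ' total is 5

ShowSum 2, 3               ' displays 5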

 

12). What is the new Version of QTP which is recently released in the Market?

Currently QTP 9.0 (beta) is in the market; most companies are working on QTP 8.2.

13). How to call a function present in a DLL file in a QTP script?

ExecuteFile "path of the .vbs file"
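That runs an external VBScript library rather than a DLL. To call a function that actually lives in a DLL, QTP also provides the Extern object; a minimal sketch declaring and calling a Win32 API function (this particular declaration is illustrative):

Extern.Declare micHwnd, "GetForegroundWindow", "user32.dll", "GetForegroundWindow"

hwnd = Extern.GetForegroundWindow()   ' returns the handle of the foreground window

MsgBox hwnd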

14). I have n iterations of a test run in QTP. I want to see the results of not only the latest (nth) iteration but also all the previous iterations (1st to [n-1]th).

While executing the QTP script, select the "New run results folder" option in the Run dialog box and select the path to save the result file.

That way we can review the results of previous iterations as well.

15). How to call from one action to another action in QTP?

Using "Call to Existing Action" we can call one action from another action.

Go to Insert > Call to Existing Action.

Select which action you want to call.

16). What is the difference between a bitmap checkpoint & an image checkpoint? Please explain in detail.

For a bitmap checkpoint we need not have any image; it goes by the screen area. But for an image checkpoint we need to have an image.

17) How to get the column count and column names from a result set in a database connection program?

Here are examples so that you can understand easily.

1). Example for record count (and column count):

Set dbconn = CreateObject("ADODB.Connection")

dbconn.Open "DSN=xxx"

Set objRecordset = CreateObject("ADODB.Recordset")

objRecordset.Open "SELECT * FROM LoginTable", dbconn, 1, 3

MsgBox objRecordset.RecordCount    ' number of rows

MsgBox objRecordset.Fields.Count   ' number of columns

2). For field names:

For Each x In objRecordset.Fields

MsgBox (x.Name)

Next

18) How to identify a 'web element' class object while recording and running in 'Event' mode of settings. I’m able to run either a mouse over operation or an event but not both as per the requirement...

Add the html tag and inner text properties to the web element, and remove all other properties of that object in the object repository.

19) What are the new features available in QTP 8.2 compared with earlier versions?

Setting parameters for actions: Step > Action Properties > Parameters tab, where we can set the input and output parameters.

20) What is difference between window (" ") and dialog (" ") in QTP while creating script?

A window is a separate window that appears on clicking a particular link, on which we can perform actions such as editing content; it can contain many objects such as check boxes, edit boxes, combo boxes, etc. A dialog box is used by developers to validate a field; it contains an alert message and OK and/or Cancel buttons.

21) How do you retrieve the Class name of a Test Object programmatically from within a script?

Using the GetROProperty function, e.g. obj.GetROProperty("micclass").

22) Testing Mainframe applications.

First we have to install the Terminal Emulator add-in, then try to identify the objects.

The objects will be added to the object repository, but these objects are identified depending on their X,Y coordinates.

23) Difference Between text and Text area checkpoints in QTP

Both checkpoints can check a whole string, which you could also do with a standard checkpoint. The difference is that if you want to validate that a particular string appears within a defined area, you use a text area checkpoint; use a text checkpoint when the position of the text is critical. Assume you have a text box in one screen that displays the name entered in the previous screen: you would use a text area checkpoint to validate that the name appears within the text box, and a text checkpoint in this example to see that the surname appears first (assuming a format is specified wherein the surname should come first).

24) What are different execution modes available in QTP & explain them.

There are 3 execution modes in QTP:

1. Normal - the default mode, where expected and actual results are verified and output is given.

2. Update - used when you want to update the expected results.

3. Debug - this requires the Microsoft Script Debugger.

25) How can we recognize objects in Flex application using QTP?

Record in low-level mode. In this mode every object is identified under only 2 classes:

a) WinObject b) Window

26) How to write QTP test results to an Excel application

Set ExcelSheet = CreateObject("Excel.Sheet")

ExcelSheet.Application.Visible = True

ExcelSheet.ActiveSheet.Cells(1, 1).Value = variable

ExcelSheet.SaveAs "C:\C.xls"

In addition to creating the Excel object, you can use the ADO approach as well as the inbuilt data-export function of QTP:

1) ADO: create a DSN through ODBC for the Microsoft Excel (*.xls) driver. Using this, define an ADODB.Connection and Recordset for the Excel file you want to update; then you can update any column using Recordset.Update, similar to VB.

2) DataTable.ExportSheet "C:\name.xls", 1

27) How do we write a recovery scenario, and what steps do we follow? For example: if I click the Save button and it does not save, how do I write a recovery scenario for that?

To write a recovery scenario you must first stop recording when you get an error, then from the Tools menu select Recovery Scenario Manager.

When the Recovery Scenario Manager window is open you can follow the steps, which are descriptive: select the appropriate option for the type of error you receive, and select what action is to be performed if this particular error occurs. After completing the entire procedure you need to save the scenario as well as the file.

Then, through Test > Settings > Recovery, attach the file to the test; you also have a choice of whether to activate it on every step, etc.

Thus you can use a recovery scenario to handle exceptions.

28) After importing an external .xls datasheet into the QTP data table, how do we set the number of iterations to run for all rows in the data table? We do not know how many rows exist before importing the .xls sheet.

For i = 1 To DataTable.GetSheet("sheet name").GetRowCount ... Next. GetRowCount returns the number of rows, so the loop runs once for each imported row, up to the last record.

29) How to test Dynamic web pages using QTP

If you know one property of the object, and it is unique, that is sufficient for QTP to identify the object. If one property is not enough, use Object Identification to add more properties so the object gets a unique identification, or go for descriptive programming, which bypasses the object repository.

30) How do we connect to Oracle database from QTP?

'Name: OpenConnection

'Function: Opens a connection to the required database

'Input Parameter:  strDriver, strServer, strUserID, strPassword

' strDriver => Name of Database Driver to be used

' strServer => Database Server address

' strUserID => User ID to connect to database

' strPassword => Password to connect to database

'Return Value: con => Object: Connection to database

' NULL if it fails

'Calls: None [Calls no other user defined function]

'---------------------------------------------------------------------------------------

Function OpenConnection(ByVal strDriver, ByVal strServer, ByVal strUserID, ByVal strPassword)

Dim strConnectionString, con

 strConnectionString = "DRIVER=" & strDriver & "; SERVER=" & strServer & "; UID=" & strUserID & "; PWD=" & strPassword

 Set con = CreateObject("ADODB.Connection")

 con.open strConnectionString

 Set OpenConnection = con

End Function
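
A hypothetical call to the function above; the driver name, server, and credentials are placeholders:

Set dbConn = OpenConnection("Microsoft ODBC for Oracle", "OracleServer", "scott", "tiger")
If Not dbConn Is Nothing Then
    Set rs = dbConn.Execute("SELECT SYSDATE FROM dual")
    MsgBox rs.Fields(0).Value   ' display the current database date
    dbConn.Close
End If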

31) How to configure the environment variables in QTP and how to use the variables, with example?

Setting the environment variable:

Environment.Value("VariableName") = Value

Retrieving the environment variable:

Variable = Environment.Value("VariableName")

32) What is the process for creating an automated test script using QTP assuming you have reviewed the manual test case and understand the requirements?

Moving your testing from manual to automation depends on a lot of factors. Based on these factors you decide the framework. A framework is nothing but the approach by which you apply QTP to your project. Various types of framework are available: data-driven, library-based, and keyword-driven, to name a few.

33) What is the method used to set focus on a particular field? I need the script.

An example: in the Flight application login page, by default the focus is on the username field. You can check whether the focus is there by reading the "focus" property with the GetROProperty method. (GetROProperty retrieves the current value of a property; SetTOProperty sets a property of a test object.)

To locate particular text, you can use this method:

Object.GetTextLocation (TextToFind, Left, Top, Right, Bottom, [MatchWholeWordOnly])

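A minimal sketch against the Flight sample's login dialog; the object names and the "focused" property are assumptions that depend on the environment and loaded add-ins:

If CStr(Dialog("Login").WinEdit("Agent Name:").GetROProperty("focused")) <> "True" Then
    Dialog("Login").WinEdit("Agent Name:").Click   ' clicking the field gives it focus
End If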

34) 1) What are the advantages and disadvantages of using the Virtual Object Wizard?

2) How efficiently can we use the Virtual Object Wizard in QTP?

With the Virtual Object Wizard we can map custom objects (objects that are not recognized by QTP) to a standard class.

This helps in inserting checkpoints and doing data-driven testing on those objects.

35) Can anybody explain the differences between a reusable and an external action, with an example?

Reusable action: An action that can be called multiple times by the test with which it is stored (the local test) as well as by other tests....

External action: A reusable action stored in another test...

36) How to write QTP test results to an Excel application?

Dim fso, f1

Set fso = CreateObject("Scripting.FileSystemObject")

Set f1 = fso.CreateTextFile("D:\testfile.xls", True)

Then use If conditions to store the results:

If Browser("abc").Page("xyz").Exist Then
    f1.WriteLine "Step1 Login details is Pass"
Else
    f1.WriteLine "Step1 Login details is Fail"
End If

37) How good is QTP for testing siebel applications? Whether QTP recognizes siebel objects?

QTP supports Siebel applications; a dedicated Siebel add-in is available so that QTP recognizes Siebel objects.

38) Explain the concept of checkpoint declaration in QTP, mainly for objects, pages, text and tables.

Checkpoint declaration basically compares two values and reports the result.

Suppose for a standard checkpoint the expected text for the username is "Lena"; if any other text appears in that field, say "Mona", the checkpoint fails to validate and the result shows the test case failed for that checkpoint.

Basically, checkpoints are inserted to verify the expected output and to know where the test case goes wrong.

In a standard checkpoint one can also check the height, width, and x-y coordinate location of an object or text.

39) What is Expert view in QTP? Can you explain with example?

In QTP we have two script views:

• Expert view

• Keyword view

Expert View: displays the actions performed during recording as a script, i.e. VBScript.

Keyword view: displays the actions performed during recording in terms of objects.

40) What is the best way to do regression testing using QTP?

Regression testing is not tool-dependent.

First there should be a regression test strategy; based on this strategy you can group functional test cases to form regression test suites.

Then these test suites are automated, based on the regression group.

41) What is the use of Public and Private functions in QTP?

Public: in any testing tool, not only QTP, if we declare a variable or function as Public then we can use it in any function within the script.

Private: we can use it only within the function where it is declared.

42) How can we return values from a user-defined function? Provide code with a small example.

The parts of the VBScript Function statement are as follows:

Public

Indicates that the Function procedure is accessible to all other procedures in all scripts.

Default

Used only with the Public keyword in a Class block to indicate that the Function procedure is the default method for the class. An error occurs if more than one Default procedure is specified in a class.

Private

Indicates that the Function procedure is accessible only to other procedures in the script where it is declared or, if the function is a member of a class, only to other procedures in that class.

Name

Name of the Function; follows standard variable naming conventions.

Arglist

List of variables representing arguments that are passed to the Function procedure when it is called. Commas separate multiple variables.

Statements

Any group of statements to be executed within the body of the Function procedure.

Expression

Return value of the Function.

Note: So, to return a value, assign the return value to the function name.

For Eg:

Function BinarySearch(. . .)

      . . .

      ' Value not found. Return a value of False.

      If lower > upper Then

 BinarySearch = False

 Exit Function

 End If

      . . .

End Function
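
A complete runnable illustration of the rule above (the function itself is hypothetical):

Function AddNumbers(a, b)
    AddNumbers = a + b   ' assigning to the function name sets the return value
End Function

result = AddNumbers(2, 3)
MsgBox result   ' displays 5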

43) Without recording objects in Object Repository are we able to run scripts?

Yes, We can do that by using Descriptive Programming.
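
A one-line descriptive-programming sketch; the property values describe a hypothetical application:

Browser("title:=Welcome.*").Page("title:=Welcome.*").WebEdit("name:=userName").Set "tutorial"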

44) What are the limitations of QTP?

• You must know VBScript in order to program at all.

• You must be able to program in VBScript in order to implement the really advanced testing tasks and to handle dynamic situations.

45) What is descriptive programming?

With descriptive programming we can work with objects that are not in the object repository. If we use descriptive programming we can also reduce the size of the object repository, so QTP performance increases as well.

46) What are the disadvantages or drawbacks in QTP?

1) QTP takes very long to open huge tests; CPU utilization can also reach 100% in that case.

2) QTP scripts are heavy, as they store all the HTML files (for the Active Screen) as well.

3) Block commenting is not provided up to version 8.2.

47) What is a QTP framework? When do we prepare the data for data-driven testing?

A QTP framework is like a test plan for automation using QTP. It contains information regarding naming conventions, variables, data for data-driven testing, the object repository path, how the scripts should be commented, the purpose of each script, etc.

48) You are trying to test a web-based application. You invoked QTP from TD to test the web application. While the TD itself runs on a browser, how do you ensure that these two browsers do not clash?

TestDirector is a test management tool; it is used to invoke QTP or WinRunner scripts one after another. So before we open the application (whether web-based or standalone), we should open TD; then there is no clash between TD and the web-based application.

49) What are environment variables in QTP?

The environment variables in QTP are of two kinds: 1) User-defined 2) Built-in.

50) What is keyword-driven testing? Please give the process with one example. What is the difference between data-driven and keyword-driven testing?

Keyword-driven: in keyword-driven testing, the user defines keywords for screen objects and operations, for example a page title or a label name. When the scripts run, these objects are verified against the script.

Data-driven: in data-driven testing, the user supplies a set number of values, and the column names declared in the script are replaced with values from the data source on each iteration.

51) How many types of actions are possible in QTP?

There are 3 types of actions available in QTP:

1. Reusable

2. Non-reusable

3. External

52) How to change the Object Repository Mode using CODE? i.e. from Shared to Per Action.

Set qApp = CreateObject("QuickTest.Application")
qApp.Test.Settings.Resources.ObjectRepositoryType = "Per Action"
Set qApp = Nothing

53) How can I implement error handling in QTP, and how do I call a function in QTP?

Through exception handling we can implement error handling. In QTP we have four types of exception (recovery scenario) triggers: pop-up window, object state, test run error, and application crash.

We define a function as below and then call it by its name:

Function FunctionName(parameter1, parameter2, ...)
    ...
End Function

FunctionName value1, value2

QTP (c)

1. How is run-time data (parameterization) handled in QTP?

A). You can enter test data into the Data Table, an integrated spreadsheet with the full functionality of Excel, to manipulate data sets and create multiple test iterations, without programming, to expand test case coverage. Data can be typed in or imported from databases, spreadsheets, or text files.

2. What are the Keyword view and Expert view in QTP?

A). With QuickTest's keyword-driven approach, test automation experts have full access to the underlying test and object properties, via an integrated scripting and debugging environment that is round-trip synchronized with the Keyword view.

Advanced testers can view and edit their tests in the Expert View, which reveals the underlying industry-standard VBScript that QTP automatically generates. Any changes made in the Expert View are automatically synchronized with the Keyword view.

3. Explain about the test Fusion Report of QTP?

A). Once a tester has run a test, a Test Fusion report displays all aspects of the test run: a high-level results overview, an expandable tree view of the test specifying exactly where application failures occurred, the test data used, application screen shots for every step that highlight any discrepancies, and detailed explanations of each checkpoint pass and failure. By combining Test Fusion reports with QTP, you can share reports across an entire QA and development team.

4. Which environments does QTP support?

A). QTP supports functional testing of all enterprise environments, including Windows, Web, .Net, Java/J2EE, SAP, Siebel, Oracle, PeopleSoft, Visual Basic, ActiveX, Mainframe terminal emulators, and Web services.

5. What is QTP?

A). QuickTest is a graphical-interface record-playback automation tool. It is able to work with any web, Java, or Windows client application. QuickTest enables you to test standard web objects and ActiveX controls. In addition to these environments, QTP also enables you to test Java applets and applications and multimedia objects in applications, as well as standard Windows applications, Visual Basic 6 applications, and .NET framework applications.

6. How QTP recognizes Objects in AUT?

A). QuickTest stores the definitions for application objects in a file called the Object Repository. As you record your test, QuickTest will add an entry for each item you interact with. Each Object Repository entry will be identified by a logical name (determined automatically by QuickTest), and will contain a set of properties (type, name, etc.) that uniquely identify each object.

Each line in the Quick Test script will contain a reference to the object that you interacted with, a call to the appropriate method (set, click, check) and any parameters for that method (such as the value for a call to the set method). The references to objects in the script will all be identified by the logical name, rather than any physical, descriptive properties.

7. What are the types of Object Repositories in QTP?

A). QuickTest has two types of object repositories for storing object information: shared object repositories and action object repositories. You can choose which type of object repository you want to use as the default type for new tests, and you can change the default as necessary for each new test.

The object repository per-action mode is the default setting. In this mode, Quick Test automatically creates an object repository file for each action in your test so that you can create and run tests without creating, choosing, or modifying object repository files. However, if you do modify values in an action object repository, your changes do not have any effect on other actions. Therefore, if the same test object exists in more than one action and you modify an object’s property values in one action, you may need to make the same change in every action (and any test) containing the object.

8. Explain the Check points in QTP?

A). A checkpoint verifies that expected information is displayed in an application while the test is running. You can add eight types of checkpoints to your test for standard web objects using QTP:

1. A page checkpoint checks the characteristics of a page in the application.

2. A text checkpoint checks that a text string is displayed in the appropriate place in the application.

3. An object checkpoint (standard) checks the values of an object in the application.

4. An image checkpoint checks the values of an image in the application.

5. A table checkpoint checks information within a table in the application.

6. An accessibility checkpoint checks the web page for Section 508 compliance.

7. An XML checkpoint checks the contents of individual XML data files or XML documents that are part of your web application.

8. A database checkpoint checks the contents of databases accessed by your application.

9. In how many ways we can add check points to an application using QTP?

A). We can add checkpoints while recording the application, or after recording is completed, using the Active Screen. (Note: for the second option, the Active Screen must have been enabled while recording.)

10. How does QTP identify objects in the application?

A). QTP identifies the object in the application by Logical Name and Class.

For example:

The Edit box is identified by

Logical Name: PSOPTIONS_TIME20

Class: WebEdit.

11. If an application's name changes frequently, i.e. while recording it has the name "Window1" and while running it is "Window2", how does QTP handle this?

A). QTP handles such situations using "Regular Expressions".
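
In descriptive programming, property values are treated as regular expressions by default, so a single description can match both names; a sketch using a browser title (the object hierarchy is illustrative, and the same idea applies to window names via a regular expression in the object repository):

Browser("title:=Window[0-9]").Page("title:=Window[0-9]").Sync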

12. What is parameterizing Tests?

A). When you test an application, you may want to check how it performs the same operations with multiple sets of data. For example, suppose you want to check how your application responds to ten separate sets of data. You could record ten separate tests, each with its own set of data. Alternatively, you can create a parameterized test that runs ten times: each time the test runs, it uses a different set of data.

13. What is test object model in QTP?

A). The test object model is a large set of object types or classes that QuickTest uses to represent the objects in your application. Each test object class has a list of properties that can uniquely identify objects of that class and a set of relevant methods that QuickTest can record for it.

A test object is an object that QuickTest creates in the test or component to represent the actual object in your application. QuickTest stores information about the object that will help it identify and check the object during the run session.

A run-time object is the actual object in your web site or application on which methods are performed during the run session.

When you perform an operation on your application while recording, QuickTest:

❑ Identifies the Quick Test test object class that represents the object on which you performed the operation and creates the appropriate test object

❑ Reads the current values of the object's properties in your application and stores the list of properties and values with the test object.

❑ Chooses a unique name for the object, generally using the value of one of its prominent properties.

❑ Records the operation that you performed on the object using the appropriate Quick Test test object method.

For Example, suppose you click on a Find button with the following HTML source code:

Quick Test identifies the object that you clicked as a Web Button test object.

It creates a Web Button object with the name Find, and records the following properties and values for the Find Web Button:

It also records that you performed a Click method on the Web Button.

QuickTest displays your step in the Keyword view like this:

Quick Test displays your step in the Expert View like this:

Browser("Mercury Interactive").Page("Mercury Interactive").WebButton("Find").Click

14. What is Object Spy in QTP?

A). Using the Object Spy, you can view the properties of any object in an open application. You use the Object Spy pointer to point to an object. The Object Spy displays the selected object's hierarchy tree and its properties and values in the Properties tab of the Object Spy dialog box.

15. What is the Difference between Image check-point and Bitmap check-point?

A). An image checkpoint enables you to check the properties of a web image. A bitmap checkpoint lets you check an area of a web page or application as a bitmap. While creating a test or component, you specify the area you want to check by selecting an object; you can check an entire object or any area within an object. QuickTest captures the specified object as a bitmap and inserts a checkpoint in the test or component. You can also choose to save only the selected area of the object with your test or component in order to save disk space. For example, suppose you have a web site that can display a map of a city the user specifies. The map has control keys for zooming. You can record the new map that is displayed after one click on the control key that zooms in the map. Using the bitmap checkpoint, you can check that the map zooms in correctly.

You can create bitmap checkpoints all supported testing environments (as long as the appropriate add-ins are loaded).

Note: The results of bitmap checkpoints may be affected by factors such as operating systems, screen resolution, and color settings.

16. How many ways we can parameterize data in QTP?

A). There are four types of parameters:

Test, action or component parameters enable you to use values passed from your test or component, or values from other actions in your test.

Data Table parameters enable you to create a data-driven test (or action) that runs several times using the data you supply. In each repetition, or iteration, QuickTest uses a different value from the Data Table.

Environment variable parameters enable you to use variable values from other sources during the run session. These may be values you supply, or values that QuickTest generates for you based on conditions and options you choose.

Random number parameters enable you to insert random numbers as values in your test or component. For example, to check how your application handles small and large ticket orders, you can have QuickTest generate a random number and insert it in a number-of-tickets edit field.

17. How do you do batch testing in WR & is it possible to do in QTP, if so explain?

A). Batch testing in WR is nothing but running the whole test set by selecting "Run Test Set" from the "Execution Grid". The same is possible with QTP as well. If our test cases are automated, then by selecting "Run Test Set" all the test scripts can be executed. In this process the scripts get executed one by one, while all the remaining scripts are kept in "Waiting" mode.

18. If you are given some thousands of tests to execute in 2 days, what do you do?

A). Ad hoc testing is done. It covers the most basic functionalities to verify that the system is working fine.

19. What does it mean when a check point is in red color? What do you do?

A). A red color indicates failure. Here we analyze the cause of the failure: whether it is a script issue, an environment issue, or an application issue.

20. What do you call the window in TestDirector - Test Lab?

A). The "Execution Grid". It is the place from where we run all manual/automated scripts.

21. How do you create new test sets in TD?

A). Login to TD.

Click on “Test Lab” tab.

Select the Desired folder under which we need to create the test set. (Test sets can be grouped as per module).

Click on “New Test Set or Ctrl+N” Icon to create a Test Set.

22. How do you do batch testing in WR & is it possible to do in QTP, if so explain?

A). You can use Test Batch Runner to run several tests in succession. The results for each test are stored in their default location.

Using Test Batch Runner, you can set up a list and save it as an .mtb file, so that you can easily run the same batch of tests again at another time. You can also choose to include or exclude a test in your batch list from running during a batch run.

23. How to import data from a ".xls" file to the Data Table during runtime?

A). DataTable.Import "<.xls file name>"

DataTable.ImportSheet FileName, SheetSource, SheetDest

e.g. DataTable.ImportSheet "C:\name.xls", 1, "name"

24. How to export data present in Database to an “.xls” file?

A). DataTable.Export "<.xls file name>"

25. What do you do to the script when objects are removed from the application?

26. What is the syntax to call one script from another? And the syntax to call one "Action" in another?

A). RunAction ActionName, [IterationMode, IterationRange, Parameters]

Here the action becomes reusable; this call can be made to any action.

IterationRange (String): not always required. It indicates the rows for which action iterations will be performed, and is valid only when the IterationMode is rngIterations. Enter the row range (e.g. "1-7"), or enter rngAll to run iterations on all rows.

If the action called by the RunAction statement includes an ExitAction statement, the RunAction statement can return the value of the ExitAction's RetVal argument.
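
Following the syntax above, a sketch that runs a reusable action on rows 1 to 3 of its data sheet (the action name "Login" is an assumption):

RunAction "Login", rngIterations, "1-3"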

27. How to export QTP results to an “.xls” file?

A). By default it creates an “XML” file and displays the results.

28. 3 differences between QTP & WinRunner?

A).

a) QTP uses object-based scripting (VBScript), whereas WinRunner uses TSL (C-based) scripting.

b) QTP supports ".NET" application automation, which is not available in WinRunner.

c) QTP has "Active Screen" support, which captures the application; this is not available in WR.

d) QTP has a "Data Table" to store script values and variables, which WR does not have.

e) Using a "point and click" capability you can easily interface with objects and their definitions and create checkpoints after having recorded a script, without having to navigate back to that location in your application as you have to with WinRunner. This greatly speeds up script development.

29. How to add a Runtime parameter to a datasheet?

A). DataTable.LocalSheet

The following example uses the LocalSheet property to return the local sheet of the run-time Data Table in order to add a parameter (column) to it:

MyParam = DataTable.LocalSheet.AddParameter("Time", "5:45")

30. What scripting language is QTP based on?

A). VBScript.

31. Analyzing the Checkpoint results?

A). Standard checkpoint: by adding standard checkpoints to your tests or components, you can compare the expected values of object properties to the object's current values during a run session. If the results do not match, the checkpoint fails.

32. Table and DB Checkpoints:

A). By adding table checkpoints to your tests or components, you can check that a specified value is displayed in a cell in a table in your application. By adding database checkpoints to your tests or components, you can check the contents of databases accessed by your application.

The results displayed for table and database checkpoints are similar. When you run the test or component, QuickTest compares the expected results of the checkpoints to the actual results of the run session. If the results do not match, the checkpoint fails.

You can check that a specified value is displayed in a cell in a table by adding a table checkpoint to your test or component. For ActiveX tables, you can also check the properties of the table object. To add a table checkpoint, you use the Checkpoint Properties dialog box. Table checkpoints are supported for Web and ActiveX applications, as well as for a variety of external add-in environments.

You can use database checkpoints in your test or component to check databases accessed by your web site or application and to detect defects. You define a query on your database, and then you create a database checkpoint that checks the results of the query. Database checkpoints are supported for all environments supported by QuickTest, by default, as well as for a variety of external add-in environments.

There are two ways to define a database query:

a) Use Microsoft Query. You can install Microsoft Query from the custom installation of Microsoft Office.

b) Manually define an SQL statement.

The checkpoint timeout option is available only when creating a table checkpoint. It is not available when creating a database checkpoint.

33. Checking Bitmaps:

A). You can check an area of a web page or application as a bitmap. While creating a test or component, you specify the area you want to check by selecting an object. You can check an entire object or any area within an object. Quick Test captures the specified object as a bitmap, and inserts a checkpoint in the test or component. You can also choose to save only the selected area of the object with your test or component in order to save disk space.

When you run the test or component, Quick Test compares the object or selected area of the object currently displayed on the web page or application with the bitmap stored when the test or component was recorded. If there are differences, Quick Test captures a bitmap of the actual object and displays it with the expected bitmap in the details portion of the Test Results window. By comparing the two bitmaps (expected and actual), you can identify the nature of the discrepancy. For more information on test results of a checkpoint, see Viewing Checkpoint Results.

For example, suppose you have a web site that can display a map of a city the user specifies. The map has control keys for zooming. You can record the new map that is displayed after one click on the control key that zooms in the map. Using the bitmap checkpoint, you can check that the map zooms in correctly.

You can create bitmap checkpoints for all supported testing environments (as long as the appropriate add-ins are loaded).

Note: The results of bitmap checkpoints may be affected by factors such as operating system, screen resolution, and color settings.

34. Text/Text Area checkpoint:

A). In the Text/Text Area Checkpoint Properties dialog box, you can specify the text to be checked as well as which text is displayed before and after the checked text. These configuration options are particularly helpful when the text string you want to check appears several times or when it could change in a predictable way during a run session.

Note: in Windows-based environments, if more than one line of text is selected, the Checkpoint Summary pane displays [complex value] instead of the selected text for the checkpoint.

QuickTest automatically displays the checked text in red and the text before and after the checked text in blue. For text area checkpoints, only the text string captured from the defined area is displayed (Text Before and Text After are not displayed).

To designate parts of the captured string as Checked Text and other parts as Text Before and Text After, click the Configure button. The Configure Text Selection dialog box opens.

Checking XML :

XML (Extensible Markup Language) is a meta-markup language for text documents that is endorsed as a standard by the W3C. XML makes complex data structures portable between different computer environments/operating systems and programming languages, facilitating the sharing of data.

XML files contain text with simple tags that describe the data within an XML document. These tags describe the data content, but not the presentation of the data. Applications that display an XML document or file use either Cascading Style Sheets (CSS) or XSL Formatting Objects (XSL-FO) to present the data.

You can verify the data content of XML files by inserting XML checkpoints. A few common uses of XML checkpoints are described below:

An XML file can be a static data file that is accessed in order to retrieve commonly used data for which a quick response time is needed, for example country names, zip codes, or area codes. Although this data can change over time, it is normally quite static. You can use an XML file checkpoint to validate that the data has not changed from one application release to another.

An XML file can consist of elements with attributes and values (character data). There is a parent-and-child relationship between the elements, and elements can have attributes associated with them. If any part of this structure (including data) changes, your application's ability to process the XML file may be affected. Using an XML checkpoint, you can check the content of an element to make sure that its tags, attributes, and values have not changed.

XML files are often an intermediary that retrieves dynamically changing data from one system. The data is then accessed by another system using Document Type Definitions (DTDs), enabling the accessing system to read and display the information in the file. You can use an XML checkpoint and parameterize the captured data values in order to check an XML document or file whose data changes in a predictable way.

XML documents and files often need a well-defined structure in order to be portable across platforms and development systems. One way to accomplish this is by developing an XML schema, which describes the structure of the XML elements and data types. You can use schema validation to check that each item of content in an XML file adheres to the schema description of the element in which the content is to be placed.

35. Object repository types: which one to use, and when?

A). Deciding which Object Repository Mode to Choose

To choose the default object repository mode and the appropriate object repository mode for each test, you need to understand the differences between the two modes.

In general, the object repository per-action mode is easiest to use when you are creating simple record and run tests, especially under the following conditions:

You have only one, or very few, tests that correspond to a given application, interface, or set of objects.

You do not expect to frequently modify test object properties.

You generally create single-action tests.

Conversely, the shared object repository mode is generally the preferred mode when:

You have several tests that test elements of the same application, interface, or set of objects.

You expect the object properties in your application to change from time to time and/or you regularly need to update or modify test object properties.

You often work with multi-action tests and regularly use the Insert Copy of Action and Insert Call to Action options.

36. Can we script any test case without having an object repository? Or is using the object repository a must?

A). No, it is not a must. You can script without the object repository by knowing the window handles and by spying on and recognizing the objects' logical names and available properties.

37. How to execute a WinRunner Script in QTP?

A).

(a) TSLTest.RunTest TestPath, TestSet, [Parameters] --> used in QTP 6.0 for backward compatibility.

Parameters: the test set within Quality Center in which test runs are stored. Note that this argument is relevant only when working with a test in a Quality Center project. When the test is not saved in Quality Center, this parameter is ignored.

e.g. TSLTest.RunTest "D:\test1", ""

(b) TSLTest.RunTestEx TestPath, RunMinimized, CloseApp, [Parameters]

TSLTest.RunTestEx "C:\WinRunner\Tests\basic_flight", TRUE, FALSE, "MyValue"

CloseApp: indicates whether to close the WinRunner application when the WinRunner test run ends.

Parameters: up to 15 WinRunner function arguments.

38. How to handle Run-time errors?

A). On Error Resume Next: causes execution to continue with the statement immediately following the statement that caused the run-time error, or with the statement immediately following the most recent call out of the procedure containing the On Error Resume Next statement. This allows execution to continue despite a run-time error. You can then build the error-handling routine inline within the procedure.

Use the "Err" object, e.g.: MsgBox "Error no: " & Err.Number & ". " & Err.Description & " " & Err.Source & " " & Err.HelpContext
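
A minimal runnable sketch of inline error handling with the Err object:

On Error Resume Next
Set obj = CreateObject("No.Such.Class")   ' a deliberately failing call
If Err.Number <> 0 Then
    MsgBox "Error no: " & Err.Number & ". " & Err.Description
    Err.Clear
End If
On Error GoTo 0   ' restore default error handling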

39. How to change the run-time value of a property for an object?

A). SetTOProperty changes the property values used to identify an object during the test run. Only properties that are included in the test object description can be set.

40. How to retrieve the property of an object?

A). Using "GetROProperty".

41. How to open any application during Scripting?

A). SystemUtil is an object used to open and close applications and processes during a run session.

(a) A SystemUtil.Run statement is automatically added to your test when you run an application from the Start menu or the Run dialog box while recording a test.

E.g.: SystemUtil.Run "Notepad.exe"

SystemUtil.CloseDescendentProcesses (closes all the processes opened by QTP)

42. Types of properties that Quick Test learns while recording?

A). (a). Mandatory

(b). Assistive.

In addition to recording the mandatory and assistive properties specified in the Object Identification dialog box, QuickTest can also record a backup ordinal identifier for each test object. The ordinal identifier assigns the object a numerical value that indicates its order relative to other objects with an otherwise identical description (objects that have the same values for all properties specified in the mandatory and assistive property lists). This ordered value enables QuickTest to create a unique description when the mandatory and assistive properties are not sufficient to do so.

43. What is the extension of script and object repository files?

A). Object Repository: .tsr, Script: .mts, Excel: Default.xls

44. How to suppress warnings from the “Test results page”?

A). In the Test Results viewer, "Tools > Filters > Warnings" must be unchecked.

45. When we use the test run option "Run from Step", why is the browser not launched automatically?

A). This is the default behavior.

46. Is QTP "Unicode" compatible?

A). QTP 6.5 is not, but QTP 8.0 is expected to be Unicode compatible by the end of December 2004.

47. How to “Turn off” QTP results after running a script?

A). Go to "Tools > Options > Run" tab and deselect "View results when run session ends". But this suppresses only the result window; a log is still created and can be viewed manually, and its creation cannot be prevented.

48. How to verify the cursor focus of a certain field?

A). Use the "focus" property with the "GetROProperty" method.

49. Any limitation to XML checkpoints?

A). Mercury has determined that 1.4MB is the maximum size of XML file that QTP 6.5 can handle.

50. How to make arguments optional in a function?

A). This is not possible, as default VBScript does not support optional arguments. Instead you can pass a blank string and apply a default value inside the function when the argument is not required.
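
A sketch of the blank-string workaround (the function and its default are hypothetical):

Function Greet(name)
    If name = "" Then name = "Guest"   ' apply the default when the caller passes a blank
    Greet = "Hello, " & name
End Function

MsgBox Greet("")       ' displays "Hello, Guest"
MsgBox Greet("Lena")   ' displays "Hello, Lena"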

51. How do you convert a string to an integer?

A). CInt() is a conversion function available for this.

52. Inserting a call to an action does not import all the columns of the global sheet into the Data Table. Why?

A). Inserting a call to an action will only import the columns of the action called.

LOADRUNNER (a)

What is load testing?

Load testing is to test whether the application works fine with the loads that result from a large number of simultaneous users and transactions, and to determine whether it can handle peak usage periods.

What is Performance testing?

Timing for both read and update transactions should be gathered to determine whether system functions are being performed in an acceptable timeframe. This should be done standalone and then in a multi user environment to determine the effect of multiple transactions on the timing of a single transaction.

Explain the Load testing process?

Step 1: Planning the test. Here we develop a clearly defined test plan to ensure the test scenarios we develop will accomplish the load-testing objectives.

Step 2: Creating Vusers. Here we create Vuser scripts that contain tasks performed by each Vuser, tasks performed by Vusers as a whole, and tasks measured as transactions.

Step 3: Creating the scenario. A scenario describes the events that occur during a testing session. It includes a list of machines, scripts, and Vusers that run during the scenario. We create scenarios using the LoadRunner Controller. We can create manual scenarios as well as goal-oriented scenarios. In manual scenarios, we define the number of Vusers, the load generator machines, and the percentage of Vusers to be assigned to each script. For web tests, we may create a goal-oriented scenario where we define the goal that our test has to achieve; LoadRunner automatically builds the scenario for us.

Step 4: Running the scenario. We emulate load on the server by instructing multiple Vusers to perform tasks simultaneously. Before the testing, we set the scenario configuration and scheduling. We can run the entire scenario, Vuser groups, or individual Vusers.

Step 5: Monitoring the scenario. We monitor scenario execution using the LoadRunner online runtime, transaction, system resource, web resource, web server resource, web application server resource, database server resource, network delay, streaming media resource, firewall server resource, ERP server resource, and Java performance monitors.

Step 6: Analyzing test results. During scenario execution, LoadRunner records the performance of the application under different loads. We use LoadRunner's graphs and reports to analyze the application's performance.

When do you do load and performance Testing?

We perform load testing once we are done with interface (GUI) testing. Modern system architectures are large and complex. Whereas single-user testing focuses primarily on the functionality and user interface of a system component, application testing focuses on the performance and reliability of an entire system. For example, a typical application-testing scenario might depict 1000 users logging in simultaneously to a system. This gives rise to issues such as: what is the response time of the system, does it crash, will it work with different software applications and platforms, can it hold so many hundreds and thousands of users, etc. This is when we do load and performance testing.

What are the components of LoadRunner?

The components of LoadRunner are the Virtual User Generator, the Controller, the Agent process, LoadRunner Analysis and Monitoring, and LoadRunner Books Online.

What Component of LoadRunner would you use to record a Script?

The Virtual User Generator (VuGen) component is used to record a script. It enables you to develop Vuser scripts for a variety of application types and communication protocols.

What Component of LoadRunner would you use to play Back the script in multi user mode?

The Controller component is used to playback the script in multi-user mode. This is done during a scenario run where a vuser script is executed by a number of vusers in a group.

What is a rendezvous point?

You insert rendezvous points into Vuser scripts to emulate heavy user load on the server. Rendezvous points instruct Vusers to wait during test execution for multiple Vusers to arrive at a certain point, in order that they may simultaneously perform a task. For example, to emulate peak load on the bank server, you can insert a rendezvous point instructing 100 Vusers to deposit cash into their accounts at the same time.
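
A minimal Vuser-script sketch in C; the rendezvous name, transaction name, and URL are assumptions (the rendezvous must also be defined in the Controller):

Action()
{
    lr_rendezvous("deposit_cash");     /* each Vuser waits here until the quorum arrives */
    lr_start_transaction("deposit");
    web_url("deposit", "URL=http://myserver/deposit", LAST);   /* illustrative request fired by all Vusers together */
    lr_end_transaction("deposit", LR_AUTO);
    return 0;
}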

What is a scenario?

A scenario defines the events that occur during each testing session. For example, a scenario defines and controls the number of users to emulate, the actions to be performed, and the machines on which the virtual users run their emulations.

Explain the recording mode for web Vuser script?

We use VuGen to develop a Vuser script by recording a user performing typical business processes on a client application. VuGen creates the script by recording the activity between the client and the server. For example, in web based applications, VuGen monitors the client end of the database and traces all the requests sent to, and received from, the database server. We use VuGen to: Monitor the communication between the application and the server; Generate the required function calls; and Insert the generated function calls into a Vuser script.

Why do you create parameters?

Parameters are like script variables. They are used to vary input to the server and to emulate real users: different sets of data are sent to the server each time the script is run. Parameters also better simulate the usage model for more accurate testing from the Controller, since one script can emulate many different users on the system.

What is correlation? Explain the difference between automatic correlation and manual correlation?

Correlation is used to obtain data which is unique for each run of the script and which is generated by nested queries. Correlation provides the value to avoid errors arising out of duplicate values and also optimizes the code (to avoid nested queries). Automatic correlation is where we set some rules for correlation; it can be application-server specific. Here values are replaced by data which is created by these rules. In manual correlation, the value we want to correlate is scanned, and the "create correlation" command is used to correlate it.

How do you find out where correlation is required? Give few examples from your projects?

Two ways: first, we can scan for correlations and see the list of values which can be correlated; from this we pick a value to be correlated. Secondly, we can record two scripts and compare them; we can look in the difference file for the values which need to be correlated. In my project, there was a unique ID developed for each customer; it was nothing but the insurance number, which was generated automatically, sequential, and unique. I had to correlate this value in order to avoid errors while running my script. I did it using scan for correlation.

Where do you set automatic correlation options?

Automatic correlation from the web point of view can be set in the recording options, correlation tab. Here we can enable correlation for the entire script and choose either issue online messages or offline actions, where we can define rules for that correlation. Automatic correlation for a database can be done using the show output window, scanning for correlation, picking the correlate query tab, and choosing which query value we want to correlate. If we know the specific value to be correlated, we just do create correlation for the value and specify how the value is to be created.

What is the function to capture dynamic values in the web Vuser script? - The web_reg_save_param function saves dynamic data information to a parameter.
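
A minimal sketch; the parameter name and boundaries are assumptions about the server's response:

web_reg_save_param("sessionID", "LB=session_id=", "RB=&", LAST);   /* register the capture before the request */
web_url("login", "URL=http://myserver/login", LAST);               /* the response is scanned for the boundaries */
lr_output_message("captured: %s", lr_eval_string("{sessionID}"));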

When do you disable logging in the Virtual User Generator? When do you choose standard and extended logs?

Once we debug our script and verify that it is functional, we can enable logging for errors only. When we add a script to a scenario, logging is automatically disabled.

Standard log option: when you select standard log, it creates a standard log of the functions and messages sent during script execution, to use for debugging. Disable this option for large load testing scenarios. When you copy a script to a scenario, logging is automatically disabled.

Extended log option: select extended log to create an extended log, including warnings and other messages. Disable this option for large load testing scenarios. When you copy a script to a scenario, logging is automatically disabled. We can specify which additional information should be added to the extended log using the extended log options.

How do you debug a LoadRunner script?

VuGen contains two options to help debug Vuser scripts-the Run Step by Step command and breakpoints. The Debug settings in the Options dialog box allow us to determine the extent of the trace to be performed during scenario execution. The debug information is written to the Output window. We can manually set the message class within your script using the lr_set_debug_message function. This is useful if we want to receive debug information about a small section of the script only.

How do you write user defined functions in LR? Give me few functions you wrote in your previous project?

Before we create user-defined functions we need to create the external library (DLL) containing the function. We add this library to the VuGen bin directory. Once the library is added, we assign the user-defined function as a parameter. The function should have the following format: __declspec(dllexport) char* <function name>(char*, char*). Examples of user-defined functions: GetVersion, GetCurrentTime, and GetPlatform are some of the user-defined functions used in my earlier project.

What are the changes you can make in run-time settings?

A). The run-time settings that we make are: a) Pacing, which includes the iteration count; b) Log, under which we have disable logging, standard log, and extended log; c) Think time, where we have two options, ignore think time and replay think time; d) General, under which we can set the Vusers to run as a process or as a thread (multithreading), and whether each step is a transaction.

Where do you set Iteration for Vuser testing? - We set Iterations in the Run Time Settings of the VuGen. The navigation for this is Run time settings, Pacing tab, set number of iterations.

How do you perform functional testing under load?

Functionality under load can be tested by running several Vusers concurrently. By increasing the amount of Vusers, we can determine how much load the server can sustain.

What is Ramp up? How do you set this?

This option is used to gradually increase the amount of Vusers/load on the server. An initial value is set, and a value to wait between intervals can be specified. To set Ramp Up, go to "Scenario Scheduling Options".

What is the advantage of running the Vuser as thread?

VuGen provides the facility to use multithreading. This enables more Vusers to be run per generator. If the Vuser is run as a process, the same driver program is loaded into memory for each Vuser, thus taking up a large amount of memory. This limits the number of Vusers that can be run on a single generator. If the Vuser is run as a thread, only one instance of the driver program is loaded into memory for the given number of Vusers (say 100). Each thread shares the memory of the parent driver program, thus enabling more Vusers to be run per generator.

If you want to stop the execution of your script on error, how do you do that?

The lr_abort function aborts the execution of a Vuser script. It instructs the Vuser to stop executing the Actions section, execute the vuser_end section and end the execution. This function is useful when you need to manually abort a script execution as a result of a specific error condition. When you end a script using this function, the Vuser is assigned the status "Stopped". For this to take effect, we have to first uncheck the “Continue on error” option in Run-Time Settings.  
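
A minimal sketch; the parameter {order_id} is assumed to have been captured earlier (with "NotFound=EMPTY", so a miss yields an empty string):

if (strcmp(lr_eval_string("{order_id}"), "") == 0) {
    lr_error_message("order_id was not captured, aborting the Vuser");
    lr_abort();   /* skips the rest of Action, runs vuser_end, and stops the Vuser */
}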

What is the relation between Response Time and Throughput?

The Throughput graph shows the amount of data in bytes that the Vusers received from the server in a second. When we compare this with the transaction response time, we will notice that as throughput decreased, the response time also decreased. Similarly, the peak throughput and highest response time would occur approximately at the same time.

Explain the Configuration of your systems?

The configuration of our systems refers to that of the client machines on which we run the Vusers. The configuration of any client machine includes its hardware settings, memory, operating system, software applications, development tools, etc. This system component configuration should match with the overall system configuration that would include the network infrastructure, the web server, the database server, and any other components that go with this larger system so as to achieve the load testing objectives.

How do you identify the performance bottlenecks?

Performance Bottlenecks can be detected by using monitors. These monitors might be application server monitors, web server monitors, database server monitors and network monitors. They help in finding out the troubled area in our scenario, which causes increased response time. The measurements made are usually performance response time, throughput, hits/sec, network delay graphs, etc.

If web server, database and Network are all fine where could be the problem?

The problem could be in the system itself or in the application server or in the code written for the application.

How did you find web server related issues?

Using web resource monitors we can find the performance of web servers. Using these monitors we can analyze throughput on the web server, the number of hits per second that occurred during the scenario, the number of HTTP responses per second, and the number of downloaded pages per second.

How did you find database related issues? - By running the "Database" monitor and with the help of the "Data Resource Graph" we can find database related issues. E.g., you can specify the resource you want to measure before running the Controller, and then you can see database related issues.

Explain all the web-recording options?

What is the difference between Overlay graph and Correlate graph?

Overlay Graph: it overlays the content of two graphs that share a common x-axis. The left y-axis on the merged graph shows the current graph's values, and the right y-axis shows the values of the graph that was merged.

Correlate Graph: it plots the y-axes of two graphs against each other. The active graph's y-axis becomes the x-axis of the merged graph, and the y-axis of the graph that was merged becomes the merged graph's y-axis.

How did you plan the Load? What are the Criteria?

A load test is planned to decide the number of users, what kind of machines we are going to use, and from where they are run. It is based on two important documents, the Task Distribution Diagram and the Transaction Profile. The Task Distribution Diagram gives us information on the number of users for a particular transaction and the time of the load; peak usage and off-usage are decided from this diagram. The Transaction Profile gives us information about the transaction names and their priority levels with regard to the scenario we are deciding.

What does vuser_init action contain?

The vuser_init action contains procedures to log in to a server.

What does vuser_end action contain?

The vuser_end section contains log-off procedures.

What is think time? How do you change the threshold?

Think time is the time that a real user waits between actions. For example, when a user receives data from a server, the user may wait several seconds to review the data before responding; this delay is known as think time. Changing the threshold: the threshold level is the level below which the recorded think time will be ignored. The default value is five (5) seconds. We can change the think time threshold in the recording options of VuGen.

What is the difference between standard log and extended log?

The standard log sends a subset of the functions and messages sent during script execution to a log; the subset depends on the Vuser type. The extended log sends detailed script execution messages to the output log. This is mainly used during debugging, when we want information about parameter substitution, data returned by the server, and advanced trace.

Explain the following functions:

lr_debug_message - sends a debug message to the output log when the specified message class is set.

lr_output_message - sends notifications to the Controller Output window and the Vuser log file.

lr_error_message - sends an error message to the LoadRunner Output window.

lrd_stmt - associates a character string (usually a SQL statement) with a cursor; this function sets a SQL statement to be processed.

lrd_fetch - fetches the next row from the result set.

Throughput - If the throughput scales upward as time progresses and the number of Vusers increases, this indicates that the bandwidth is sufficient. If the graph were to remain relatively flat as the number of Vusers increased, it would be reasonable to conclude that the bandwidth is constraining the volume of data delivered.

Types of Goals in Goal-Oriented Scenario

LoadRunner provides you with five different types of goals in a goal-oriented scenario:

The number of concurrent Vusers

The number of hits per second

The number of transactions per second

The number of pages per minute

The transaction response time that you want your scenario to achieve

Analysis Scenario (Bottlenecks):

In the Running Vusers graph correlated with the response time graph, you can see that as the number of Vusers increases, the average response time of the check itinerary transaction very gradually increases. In other words, the average response time steadily increases as the load increases. At 56 Vusers, there is a sudden, sharp increase in the average response time. We say that the test broke the server; that is the mean time before failure (MTBF). The response time clearly began to degrade when there were more than 56 Vusers running simultaneously.

LOADRUNNER (b)

1. What protocols does LoadRunner support?

Industry-standard protocols, for example HTTP and ODBC, are explicitly supported by LoadRunner. Furthermore, any protocol that communicates over a Windows socket can be supported.

2. What do I need to know to do load testing in addition to knowing how to use the LoadRunner tool?

In addition to knowing the tool:

- Management aspects of Load Testing, Planning being paramount

- Requirements gathering, Profile/Mix, SLA, Acceptance Criteria....

- a general understanding of the protocol you are working with; developers can be unhelpful

- a basic understanding of C programming

- know that you WILL be working with diminishing timescales and you are really at the END of the lifecycle

- as a result of the above you may have to work unsociable hours including weekends

- Managers and other "Powers that be" ("box tickers") will not understand your plight

- You need to be able to communicate effectively at all levels with different departments from Business to Dev to Sys Test

- voice your problems as soon as possible - Planning Planning

- Fail to Plan - Plan to FAIL

3. What can I monitor with LoadRunner?

You can monitor system bottlenecks during a test run, and capture and display the performance data from every server or component.

4. How many users can I emulate with LoadRunner on a PC?

There is no hard limit; it depends on the system's response, which in turn depends on various factors such as the entire system configuration. If a system bottleneck is observed at the very beginning, with a minimal number of Vusers, no further Vusers should be added until the observed bottleneck is resolved.

5. What are the Vuser components in LoadRunner?

Application components used are the client, the database, and additionally a business application server.

A Web server works on and through a LAN, WAN, or WWW connection.

Application server components are the client, business server, and database server, communicating not directly but through protocols such as FTP.

6. LoadRunner Function - How to get current system time

This function was developed for use in the Mercury LoadRunner performance tool. Its main use is to return the current system time at any point while a LoadRunner script is running. It can be used to report transaction times, and script start and end times.

long get_secs_since_midnight(void)
{
    char *curr_hr;     /* pointer to a parameter with current clock hr  */
    char *curr_min;    /* pointer to a parameter with current clock min */
    char *curr_sec;    /* pointer to a parameter with current clock sec */
    long current_time; /* current number of seconds since midnight      */
    long hr_secs;      /* current hour converted to secs                */
    long min_secs;     /* current minutes converted to secs             */
    long secs_secs;    /* current number of seconds                     */

    curr_hr  = lr_eval_string("{current_hr}");
    curr_min = lr_eval_string("{current_min}");
    curr_sec = lr_eval_string("{current_sec}");

    hr_secs   = (atoi(curr_hr)) * 60 * 60;
    min_secs  = (atoi(curr_min)) * 60;
    secs_secs = atoi(curr_sec);

    current_time = hr_secs + min_secs + secs_secs;
    return (current_time);
}
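A minimal usage sketch follows; it assumes that current_hr, current_min, and current_sec have been defined in VuGen as Date/Time parameters (formats %H, %M, and %S), which the function above requires:

Action()
{
    long t;

    /* requires the current_hr/current_min/current_sec parameters to exist */
    t = get_secs_since_midnight();
    lr_output_message("Seconds since midnight: %ld", t);

    return 0;
}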

7. What are the reasons why parameterization is necessary when load testing the Web server and the database server?

Parameterization is generally done to test with multiple sets of data or records, so that each Vuser or iteration sends different values to the server rather than replaying the same recorded values.
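As a sketch, here is what a parameterized request might look like in a Web Vuser script; the username parameter and the URL are assumptions for the example:

Action()
{
    /* {username} is replaced at run time with the next value taken
       from the parameter list defined in VuGen */
    lr_output_message("Logging in as %s", lr_eval_string("{username}"));

    web_url("login",
            "URL=http://example.com/login?user={username}",   /* placeholder */
            LAST);

    return 0;
}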

8. What is LoadRunner?

LoadRunner accurately measures and analyzes system performance and functionality.

9. When is LoadRunner used?

When multiple users work concurrently.

10. What is the advantage of using LoadRunner?

1- LoadRunner automatically records the performance of the client/server system during the test. 2- LoadRunner checks where performance delays occur: network or client delays. 3- LoadRunner monitors the network and server resources to help improve performance.

11. What is a scenario?

A scenario defines the events that occur during a testing session, for example deposit cash, withdraw money, and so on.

12. What is the Vuser in the scenario?

LoadRunner replaces the human user with a Vuser (virtual user).

13. What is a vuser script?

While running a scenario, every Vuser executes a script; that script is known as the vuser script.

14. What does the vuser script contain?

The vuser script includes the functions that measure and record the performance of the server during the scenario.

15. What is a transaction?

A transaction measures the time it takes for the server to respond to a task submitted by the Vuser.

16. What is a rendezvous point?

It is inserted to emulate peak load on the server.

17. When is a rendezvous point inserted?

When multiple Vusers must perform tasks at exactly the same time, a rendezvous point is inserted to emulate peak load on the server.

18. What is the LoadRunner Controller?

The Controller manages and maintains the scenario. Using the Controller, you control all the Vusers from a single workstation.

19. What is a Host?

A host is a machine that executes the vuser script.

20. What is the LoadRunner testing process?

There are 5 steps:

1- planning the test.

2- creating the vuser scripts.

3- creating the scenario.

4- running the scenario.

5- analyzing the test results.

21. What is planning the test?

Defining the performance testing requirements, for example the number of concurrent users, the typical business processes, and the required response times.

22. What do you mean by creating vuser scripts?

Creating vuser scripts that emulate the actions the virtual users perform during scenario execution.

23. What is the process for developing a vuser script?

There are 5 steps for developing a vuser script:

1- recording the vuser script.

2- editing the vuser script.

3- configuring the run-time settings.

4- running the vuser script in stand-alone mode.

5- incorporating the vuser script into a LoadRunner scenario.

24. How do you create a scenario?

Install the LoadRunner Controller on the host. Then build the list of hosts (where the vuser scripts execute), the list of vuser scripts (that the Vusers run), and the list of Vusers that run during the scenario.

25. What do you mean by the Remote Command Launcher (RCL)?

The RCL enables the Controller to start applications on the host machine.

26. What is the LoadRunner Agent?

The Agent is the interface between the host machine and the Controller.

27. How do you load a LoadRunner Agent?

The Controller instructs the Remote Command Launcher to launch the Agent.

28. How many types of Vusers are available?

There are several types of Vusers: GUI, Database, RTE (terminal emulator), SAP, DCOM, PeopleSoft, Java, and Baan.

29. What is a GUI Vuser and on which platforms does it run?

A GUI Vuser operates a graphical user interface application, and it can run in either the MS-Windows or X-Windows environment.

31. Which tool is used for MS-Windows?

WinRunner is used for MS-Windows applications.

32. Which tool is used for X-Windows?

XRunner and VXRunner are used for X-Windows applications.

33. What is a LoadRunner API function?

Database Vusers do not operate a client application. Using LoadRunner API functions, a database Vuser can access data from the server directly.

34. How do you develop a database vuser script?

Develop the database vuser script either by recording with the LoadRunner vuser script generator (VuGen) or by using a LoadRunner vuser script template.

35. How many sections does a database vuser script have?

Three sections, written in code that resembles C, with SQL calls to the database, in TSL (Test Script Language).

36. How do you enhance the basic script?

By adding control-flow structures, by inserting transaction points and rendezvous points, and by adding functions.

37. What are the run-time settings?

Run-time settings include loop (iteration), log, and timing information.
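As a sketch of how the timing settings relate to the script: lr_think_time() calls recorded in the script emulate user wait time, and the think-time run-time settings control whether and how those calls are replayed (the URL below is a placeholder):

Action()
{
    web_url("home",
            "URL=http://example.com/",   /* placeholder URL */
            LAST);

    /* Recorded user pause of 5 seconds; whether it is ignored, replayed
       as recorded, multiplied, or capped is governed by the think-time
       run-time settings. */
    lr_think_time(5);

    return 0;
}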

38. What is stand-alone mode?

Running a vuser script in stand-alone mode verifies that the script runs correctly before it is incorporated into a scenario.

39. What types of functions does VuGen generate and insert into the script when you record it?

1- LR functions (vuser functions). 2- Protocol functions.

40. What is an LR function?

An LR function obtains information about the Vuser running in a scenario.

41. What is a protocol function?

A protocol function obtains information specific to the type of Vuser.
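A small sketch contrasting the two kinds of functions, assuming a Web (HTTP/HTML) Vuser; the URL is a placeholder:

Action()
{
    int  vuser_id, scid;
    char *group;

    /* LR (vuser) function: protocol-independent, returns Vuser information */
    lr_whoami(&vuser_id, &group, &scid);
    lr_output_message("Vuser %d in group %s", vuser_id, group);

    /* Protocol function: specific to the Vuser type, here a Web request */
    web_url("home",
            "URL=http://example.com/",
            LAST);

    return 0;
}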

42. What sections does VuGen create in a vuser script?

VuGen creates 3 sections:

1- vuser_init

2- Action

3- vuser_end

43. What is the vuser_init section?

It records the log-in to the server (run once, when the Vuser is initialized/loaded).

44. What is the Action section?

It records the client activity.

45. What is the vuser_end section?

It records the log-off from the server (run once, when the Vuser stops).
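The resulting script skeleton looks roughly like the following sketch; the comments indicate what each section typically records:

vuser_init()
{
    /* recorded log-in to the server; runs once when the Vuser is loaded */
    return 0;
}

Action()
{
    /* recorded client activity; repeated on every iteration */
    return 0;
}

vuser_end()
{
    /* recorded log-off from the server; runs once when the Vuser stops */
    return 0;
}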

46. How does VuGen create a vuser script?

By recording the activity between the client and the server.

47. How do you edit the script?

While editing the script, we insert the transaction points and rendezvous points.

48. What is the LoadRunner start-transaction function and its syntax?

It starts a transaction in the script. Syntax: lr_start_transaction("transaction_name");

49. What is the LoadRunner end-transaction function and its syntax?

It ends the transaction. Syntax: lr_end_transaction("transaction_name", LR_AUTO);

50. Where do you insert the rendezvous point?

A rendezvous point is inserted into the script to emulate peak load on the server. Syntax: lr_rendezvous("rendezvous_name");
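Putting questions 48-50 together, a minimal Action section might look like this sketch; the transaction and rendezvous names are placeholders:

Action()
{
    /* Vusers wait here until the configured number arrive,
       then all are released together to create peak load. */
    lr_rendezvous("check_itinerary");

    lr_start_transaction("check_itinerary");

    /* ... recorded protocol steps for the business process go here ... */

    /* LR_AUTO lets LoadRunner decide whether the transaction passed or failed */
    lr_end_transaction("check_itinerary", LR_AUTO);

    return 0;
}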

51. What are the elements of the LoadRunner Controller?

Title bar (the name of the scenario currently being worked on). Menu bar (for selecting the various commands). Toolbar. Status bar.

52. What are the 5 icons that appear at the bottom of the Controller window?

1- Hosts window (list of machines).

2- Scripts window (list of all the vuser scripts).

3- Rendezvous window.

4- Transactions window (displays all the transactions).

5- Output window (displays error and notification messages).

53. What is .lrs?

LoadRunner saves scenario information in a scenario file with the .lrs extension.

54. What is the scenario wizard?

Through the scenario wizard we can create a new scenario.

55. What are filtering and sorting?

Filtering - we can filter the information to display only those items that meet the selected criteria (filter box); for example, you can filter the Vusers to show only those in the READY state. Sorting - we can sort all the Vusers in the Vuser list in order of their Vuser ID (1, 2, 3, ...).

56. What information is shown for each host?

1- the status of the host.

2- the platform type of the host (Windows/UNIX).

3- details of the scenario.

57. How do you create a host list for a scenario?

1- install the Remote Command Launcher on every machine.

2- add the names of the hosts to the host list.

3- set the attributes for each host.

4- select which hosts will take part in the scenario.

59. What do the host attributes determine?

1- the maximum number of Vusers that the host can run.

2- the initialization quota.

3- the location of the WinRunner configuration file.

4- the location of the files during run-time.

60. How do you set the maximum number of Vusers that a host can run?

We can modify the maximum number of Vusers according to the available resources, the needs of the scenario, and the LoadRunner license agreement.

61. What do you mean by the initialization quota?

The initialization quota is the number of Vusers that the host can initialize at one time.

62. When the LoadRunner Controller opens WinRunner, where is the WinRunner configuration file?

Wrun.ini.

63. What is the scenario default option?

It instructs the Vusers to use the scenario's WinRunner configuration file.

64. What is the local configuration file option?

It instructs the Vusers to use the host's WinRunner configuration file.

65. What do you mean by path?

It instructs the Vusers to use a WinRunner configuration file in a specific location on the network.

66. During run time, where do the hosts save the files?

Temporarily, on the local drive of each host.

67. What is the script list?

It contains all the vuser scripts that the Vusers can run.

68. What information does the Scripts window contain for each script in the list?

1- the name of the vuser script.

2- the type of the Vuser.

3- the location (path).

4- command line options.

69. How do you modify a script?

Using the Vuser Script Information dialog box.

70. What is the purpose of running the scenario?

To check the response time of the client/server system under load.

71. Why do we insert a rendezvous point while running the scenario?

So that multiple Vusers perform tasks at exactly the same time.

72. What exactly happens when a scenario runs?

1- The Controller checks the scenario configuration information.

2- Next, it invokes the applications that you selected to run with the scenario.

3- Then it transfers each script to its related host; when the Vusers are ready, they start execution.

73. How do you run a scenario?

Open an existing scenario.

Configure the scenario.

Set the results directory.

Run the scenario.

74. What happens when you initialize the Vusers?

The Vuser status changes from DOWN to PENDING to INITIALIZING to READY. If a Vuser fails to initialize, its status changes to ERROR.

75. What is the pause command?

It changes the status of a Vuser from RUNNING to PAUSED.

76. What is the Running Virtual Users graph?

It displays the number of Vusers that execute vuser scripts during each second of the scenario run. Only the RUNNING and RENDEZVOUS states are included (LOADING, READY, and PAUSED are not displayed).

77. What is the report viewer?

Each report viewer contains a report header and a report viewer toolbar.

78. What is the report header and what information does it contain?

It displays general scenario information, such as the title, scenario, results, start time, end time, and duration.

79. What is the rendezvous graph?

It indicates when Vusers were released from each rendezvous point and how many Vusers were released at each point; this helps you interpret transaction performance times.

80. What is the transactions per second graph (pass)?

It displays the number of completed, successful transactions performed during each second of the scenario run.

81. What is the percentile graph?

It shows the percentage of transactions that were performed within a given time range.

82. What is the transaction performance graph?

It displays the average time taken to perform transactions during each second of the scenario run.

83. What are all the types of correlation?

Manual and automatic correlation.

Manual correlation - correlation used to obtain data that are unique to each run of the script and that are generated by nested queries; you locate the dynamic value and correlate it by hand.

Automatic correlation is where we set rules for correlation. It can be application-server specific. Here values are replaced by data created according to these rules.

Rational Robot

What Is Rational Robot?

Rational Robot is a complete set of components for automating the testing of Microsoft Windows client/server and Internet applications.

The main component of Robot lets you start recording tests in as few as two mouse clicks. After recording, Robot plays back the tests in a fraction of the time it would take to repeat the actions manually.

Which products does Rational Robot install with?

ClearQuest - A change-request management tool that tracks and manages defects and change requests throughout the development process.

Rational LogViewer and Comparators - the tools you use to view logs and test results created when you play back scripts.

Rational Robot - the tool that you use to develop both GUI and VU (virtual user) scripts.

SQL Anywhere - A database product used to create, maintain, and run your Rational repositories.

Rational TestManager - the component that you use to plan your tests, manage your test assets, and run queries and reports.

Rational Administrator - the component that you use to create and manage repositories.

Rational SiteCheck - the component that you use to test the structural integrity of your intranet or Web site.

Additional Rational Products available only with Rational Suite TestStudio or PerformanceStudio:

TestFactory - A component-based testing tool that automatically generates TestFactory scripts according to the application's navigational structure.

Diagnostic Tools

Rational Purify - a comprehensive C/C++ run-time error checking tool.

Rational Visual Quantify - a performance profiler that provides performance analysis of a product, to aid in improving the performance of the code.

Rational Visual PureCoverage - a customizable code coverage analysis tool that provides detailed application analysis and ensures that all code has been exercised.

PerformanceStudio - a tool used for automating performance tests on client/server systems.

Rational Synchronizer - a tool used to share data among Rational Rose, RequisitePro, and Rational Robot.

RequisitePro - used to create and define requirements for your development process. The baseline version is incorporated into Rational TeamTest. The full version in Rational Suite TestStudio allows you to customize requirements databases and adds features such as traceability, change notification, and attribute management.

What is Rational Administrator?

Use the Rational Administrator to:

• Create and manage projects.

• Create a project under configuration management.

• Create a project outside of configuration management.

• Connect to a project.

• See projects that are not on your machine (register a project).

• Delete a project.

• Create and manage users and groups for a Rational Test datastore.

• Create and manage projects containing Rational RequisitePro projects and Rational Rose models.

• Manage security privileges for the entire Rational project.

• Configure a SQL Anywhere database server.

What two kinds of scripts can you create using Rational Robot?

1. GUI scripts, for functional testing.

2. Sessions (VU scripts), for performance testing.

• Perform full functional testing. Record and play back scripts that navigate through your application and test the state of objects through verification points.

• Perform full performance testing. Use Robot and TestManager together to record and play back sessions that help you determine whether a multi-client system is performing within user-defined standards under varying loads.

• Create and edit scripts using the SQABasic and VU scripting environments. The Robot editor provides color-coded commands with keyword Help for powerful integrated programming during script development. (VU scripting is used with sessions in performance testing.)

• Test applications developed with languages and environments such as Java, HTML, Visual Basic, Oracle Forms, Delphi, and PowerBuilder. You can test objects even if they are not visible in the application’s interface.

• Collect diagnostic information about an application during script playback. Robot is integrated with Rational Purify, Rational Quantify, and Rational PureCoverage. You can play back scripts under a diagnostic tool and see the results in the log.

What is a datapool?

A datapool is a source of variable test data that scripts can draw from during playback.

How do you create a datapool?

When creating a datapool, you specify the kinds of data (called data types) that the script will send; for example, customer names, addresses, and unique order numbers or product names. When you finish defining the datapool, TestManager automatically generates the number of rows of data that you specify.

How do you analyze results in the log and Comparators?

You use TestManager to view the logs that are created when you run scripts and schedules.

Use the log to:

--View the results of running a script, including verification point failures, procedural failures, aborts, and any additional playback information. Reviewing the results in the log reveals whether each script and verification point passed or failed.

Use the Comparators to:

--Analyze the results of verification points to determine why a script may have failed. Robot includes four Comparators:

. Object Properties Comparator

. Text Comparator

. Grid Comparator

. Image Comparator

Rational SiteCheck

Use Rational SiteCheck to test the structural integrity of your intranet or World Wide Web site. SiteCheck is designed to help you view, track, and maintain your rapidly changing site. Use SiteCheck to:

• Visualize the structure of your Web site and display the relationship between each page and the rest of the site.

• Identify and analyze Web pages with active content, such as forms, Java, JavaScript, ActiveX, and Visual Basic Script (VBScript).

• Filter information so that you can inspect specific file types and defects, including broken links.

• Examine and edit the source code for any Web page, with color-coded text.

• Update and repair files using the integrated editor, or configure your favorite HTML editor to perform modifications to HTML files.

• Perform comprehensive testing of secure Web sites. SiteCheck provides Secure Socket Layer (SSL) support, proxy server configuration, and support for multiple password realms.

What is a verification point?

A verification point is a point in a script that you create to confirm the state of an object across builds of the application-under-test.

Verification point types

1. Alphanumeric:

Captures and tests alphanumeric data in Windows objects that contain text, such as edit boxes, check boxes, group boxes, labels, push buttons, radio buttons, toolbars, and windows (captions).

2. Clipboard:

Captures and compares alphanumeric data that has been copied to the Clipboard.

3. File Comparison:

Compares two specified files during playback.

4. File Existence:

Verifies the existence of a specified file during playback.

5. Menu:

Captures and compares the menu title, menu items, shortcut keys, and the state of selected menus.

6. Module Existence:

Verifies whether a specified module is loaded into a specified context (process), or is loaded anywhere in memory.

7. Object Data:

Captures and compares the data inside standard Windows objects.

8. Object Properties:

Captures and compares the properties of standard Windows objects.

9. Region Image:

Captures a region of the screen as a bitmap.

10. Web Site Compare:

Captures a baseline of a Web site and compares it to the Web site at another point in time.

11. Web Site Scan:

Checks the contents of a Web site with every revision and ensures that changes have not resulted in defects.

12. Window Existence:

Verifies the existence and status of a specified window during playback.

13. Window Image:

Captures a window as a bitmap.

How to create a verification point?

1. Do one of the following:

. If recording, click the Display GUI Insert Toolbar button on the GUI Record toolbar.

. If editing, position the pointer in the script and click the Display GUI Insert Toolbar button on the Standard toolbar.

2. Click a verification point button on the GUI Insert toolbar.

3. In the Verification Point Name dialog box, edit the name as appropriate. The name can be a maximum of 20 characters.

4. Optionally, set the Wait state options. For information, see the next section, Setting a Wait State for a Verification Point.

5. Optionally, set the Expected result option.

6. Click OK.

How to add a wait state when creating a verification point?

1. Start to create the verification point.

2. In the Verification Point Name dialog box, select Apply wait state to verification point.

3. Type values for the following options:

Retry every - How often Robot retries the verification point during playback. Robot retries until the verification point passes or until the timeout limit is reached.

Timeout after - The maximum amount of time that Robot waits for the verification point to pass before it times out. If the timeout limit is reached and the verification point has not passed, Robot enters a failure in the log. The script playback either continues or stops based on the setting in the Error Recovery tab of the GUI Playback Options dialog box.

How to set the expected result when creating a verification point?

1. Start to create a verification point.

2. In the Verification Point Name dialog box, click Pass or Fail.

What are the two verification points for use with Web sites?

1. Use the Web Site Scan verification point to check the content of your Web site with every revision and ensure that changes have not resulted in defects.

2. Use the Web Site Compare verification point to capture a baseline of your Web site and compare it to the Web site at another point in time.

How to select the object to test?

1. Start creating the verification point.

2. In the Verification Point Name dialog box, type a name and click OK to open the Select Object dialog box.

3. Do one of the following:

. Select Automatically close dialog box after object selection to have the Select Object dialog box close after you select the object to test.

. Clear Automatically close dialog box after object selection to have the Select Object dialog box reappear after you select the object to test. You will need to click OK to close the dialog box.

To select a visible object directly from the application, continue with step 4. To select an object from a list of all objects on the desktop, skip to step 5.

4. To select a visible object directly from the application, drag the Object Finder tool over the object and release the mouse button.

5. To select a visible or hidden object from a list of all objects on the Windows desktop, click Browse to open the Object List dialog box. Select the object from the list and click OK.

What's a verification method?

The verification method specifies how Robot compares the baseline data captured while recording with the data captured during playback.

Eight verification methods

1. Case-Sensitive - Verifies that the text captured during recording exactly matches the captured text during playback.

2. Case-Insensitive - Verifies that the text captured during recording matches the captured text during playback in content but not necessarily in case.

3. Find Sub String Case-Sensitive - Verifies that the text captured during recording exactly matches a subset of the captured text during playback.

4. Find Sub String Case-Insensitive - Verifies that the text captured during recording matches a subset of the captured text during playback in content but not necessarily in case.

5. Numeric Equivalence - Verifies that the values of the data captured during recording exactly match the values captured during playback.

6. Numeric Range - Verifies that the values of the data captured during recording fall within a specified range during playback. You specify the From and To values for the numeric range. During playback, the verification point verifies that the numbers are within that range.

7. User-Defined and Apply a User-Defined DLL test function - Passes text to a function within a dynamic-link library (DLL) so that you can run your own custom tests. You specify the path for the directory and name of the custom DLL and the function. The verification point passes or fails based on the result that it receives back from the DLL function.

8. Verify that selected field is blank - Verifies that the selected field contains no text or numeric data. If the field is blank, the verification point passes.

What's an identification method?

An identification method tells Robot how to identify the values to compare during record and playback.

There are four identification methods

1. By Content - to verify that the recorded values exist during playback.

2. By Location - to verify that the recorded values exist in the same locations during playback.

3. By Title - to verify that the recorded values remain with their titles (names of menus or columns) during playback, even though the columns may have changed locations.

4. By Key/Value - to verify that the recorded values in a row remain the same during playback.

How to rename a verification point and its associated files?

1. Right-click the verification point name in the Asset (left) pane and click Rename.

2. Type the new name and press ENTER.

3. Click the top of the script in the Script (right) pane.

4. Click Edit > Replace.

5. Type the old name in the Find what box. Type the new name in the Replace with box.

6. Click Replace All.

How to copy a verification point?

1. Right-click the verification point in the Asset (left) pane and click Copy.

2. In the same script or in a different script (in the same project), right-click Verification Points in the Asset pane.

3. Click Paste to paste a copy of the verification point and its associated files into the project. If a verification point with that name already exists, Robot appends a unique number to the name. You can also copy and paste by dragging the verification point to Verification Points in the Asset pane.

4. Click the top of the Script (right) pane of the original script.

5. Click Edit > Find and locate the line with the verification point name that you just copied.

6. Select the entire line, which starts with Result=.

7. Click Edit > Copy.

8. Return to the script that you used in step 2. Click the location in the script where you want to paste the line. Click Edit > Paste.

9. Change the name of the verification point to match the name in the Asset pane.

How to delete a verification point and its associated files?

1. Right-click the verification point name in the Asset (left) pane and click Delete.

2. Click the top of the script in the Script (right) pane.

3. Click Edit > Find.

4. Type the name of the deleted verification point in the Find what box.

5. Click Find Next.

6. Delete the entire line, which starts with Result=.

7. Repeat steps 5 and 6 until you have deleted all references.

What's TestManager?

Rational TestManager is the one place to manage all testing activities--planning, design, implementation, execution, and analysis. TestManager ties testing with the rest of the development effort, joining your testing assets and tools to provide a single point from which to understand the exact state of your project.

TestManager supports five testing activities:

1. Plan Tests.

2. Design Tests.

3. Implement Tests.

4. Execute Tests.

5. Evaluate Tests.

Test plan

TestManager is used to define test requirements, define test scripts, and link these requirements and scripts to your test plans (written in Word).

Test plan - A test plan defines a testing project so it can be properly measured and controlled. The test plan usually describes the features and functions you are going to test and how you are going to test them. Test plans also discuss resource requirements and project schedules.

Test Requirements

Test requirements are defined in the Requirement Hierarchy in TestManager. The requirements hierarchy is a graphical outline of requirements and nested child requirements.

Requirements are stored in the RequisitePro database. RequisitePro is a tool that helps project teams control the development process by managing and tracking changes to requirements.

TestManager includes a baseline version of RequisitePro. The full version, with more features and customizations, is available in Rational Suite TestStudio.

TestManager's wizard

TestManager has a wizard that you can use to copy or import test scripts and other test assets (Datapools) from one project to another.

How does TestManager manage test logs?

When a Robot script runs, the output creates a test log. Test logs are now managed in the TestManager application. Rational now allows you to organize your logs into any type of format you need.

You can create a directory structure that suits your needs, create build names for each build (or development) version, and create folders in which to put the builds.

What's TestFactory?

Rational TestFactory is a component-based testing tool that automatically generates TestFactory scripts according to the application’s navigational structure. TestFactory is integrated with Robot and its components to provide a full array of tools for team testing under Windows NT 4.0, Windows 2000, Windows 98, and Windows 95.

With TestFactory, you can:

--Automatically create and maintain a detailed map of the application-under-test.

--Automatically generate both scripts that provide extensive product coverage and scripts that encounter defects, without recording.

--Track executed and unexecuted source code, and report its detailed findings.

--Shorten the product testing cycle by minimizing the time invested in writing navigation code.

--Play back Robot scripts in TestFactory to see extended code coverage information and to create regression suites; play back TestFactory scripts in Robot to debug them.

What's ClearQuest?

Rational ClearQuest is a change-request management tool that tracks and manages defects and change requests throughout the development process. With ClearQuest, you can manage every type of change activity associated with software development, including enhancement requests, defect reports, and documentation modifications.

With Robot and ClearQuest, you can:

-- Submit defects directly from the TestManager log or SiteCheck.

-- Modify and track defects and change requests.

-- Analyze project progress by running queries, charts, and reports.

Rational diagnostic tools

Use the Rational diagnostic tools to perform runtime error checking, profile application performance, and analyze code coverage during playback of a Robot script.

• Rational Purify is a comprehensive C/C++ run-time error checking tool that automatically pinpoints run-time errors and memory leaks in all components of an application, including third-party libraries, ensuring that code is reliable.

• Rational Quantify is an advanced performance profiler that provides application performance analysis, enabling developers to quickly find, prioritize and eliminate performance bottlenecks within an application.

• Rational PureCoverage is a customizable code coverage analysis tool that provides detailed application analysis and ensures that all code has been exercised, preventing untested code from reaching the end-user.

TestManager can be used for Performance Testing

Rational TestManager is a sophisticated tool that can be used for automating performance tests on client/server systems. A client/server system includes client applications accessing a database or application server, and browsers accessing a Web server.

Performance testing uses Rational Robot and Rational TestManager. Use Robot to record client/server conversations and store them in scripts. Use TestManager to schedule and play back the scripts. During playback, TestManager can emulate hundreds, even thousands, of users placing heavy loads and stress on your database and Web servers.

What's RequisitePro?

Rational RequisitePro is a requirements management tool that helps project teams control the development process. RequisitePro organizes your requirements by linking Microsoft Word to a requirements repository and providing traceability and change management throughout the project lifecycle.

How to set GUI recording options?

To set the GUI recording options:

1. Open the GUI Record Options dialog box by doing one of the following:

. Before you start recording, click Tools > GUI Record Options.

. Start recording by clicking the Record GUI Script button on the toolbar. In the Record GUI dialog box, click Options.

2. Set the options on each tab.

3. Click OK.

How to control how Robot responds to unknown objects?

1. Open the GUI Record Options dialog box.

2. In the General tab, do one of the following:

-- Select Define unknown objects as type Generic to have Robot automatically associate unknown objects encountered while recording with the Generic object type.

-- Clear Define unknown objects as type Generic to have Robot suspend recording and open the Define Object dialog box if it encounters an unknown object during recording. Use this dialog box to associate the object with an object type.

3. Click OK or change other options.

How to change the object order preference?

1. Open the GUI Record Options dialog box.

2. Click the Object Recognition Order tab.

3. Select a preference in the Object order preference list.

4. Click OK or change other options.

How to create a new object order preference?

1. In an ASCII editor, create an empty text file with the extension .ord.

2. Save the file in the Dat folder of the project.

3. Click Tools > GUI Record Options.

4. Click the Object Recognition Order tab.

5. From the Object order preferences list, select the name of the file you created.

6. Change the method order to customize your preferences.

How to define an object class mapping?

1. Identify the class name of the window that corresponds to the object. You can use the Spy++ utility in Visual C++ to identify the class name. You can also use the Robot Inspector tool by clicking Tools > Inspector.

2. In Robot, click Tools > General Options, and then click the Object Mapping tab.

3. From the Object type list, select the standard object type to be associated with the new object class name.

Robot displays the class names already available for that object type in the Object classes list box.

4. Click Add.

5. Type the class name you identified in step 1 and click OK.

6. Click OK.

Modifying or Deleting a Custom Class Name

1. Click Tools > General Options, and then click the Object Mapping tab.

2. From the Object type list, select the standard object type that is associated with the object class name.

Robot displays the class names already available for that object type in the Object classes list.

3. From the Object classes list, select the name to modify or delete.

4. Do one of the following:

- To modify the class name, click Modify. Change the name and click OK.

- To delete the object class mapping, click Delete. Click OK at the confirmation prompt.

5. Click OK.

How to record a GUI script?

1. Prepare to record the script.

2. If necessary, enable your application for testing.

3. Make sure your recording options are set appropriately for the recording session.

4. Click the Record GUI Script button on the toolbar to open the Record GUI dialog box.

5. Type a name (40 characters maximum) or select a script from the list. The listed scripts have already been recorded in Robot, or generated in TestFactory. To change the list, select a query from the Query list. The query lets you narrow down the displayed list, which is useful in projects with hundreds of scripts. You create queries in TestManager, and you modify queries in TestManager or Robot.

If a prefix has been defined for script autonaming, Robot displays the prefix in the Name box. To edit this name, either type in the Name box, or click Options, change the prefix in the Prefix box, and click OK.

6. To change the recording options, click Options. When finished, click OK.

7. If you selected a previously recorded script, you can change the properties by clicking Properties. When finished, click OK. To change the properties of a new script, record the script first. After recording, click File > Properties.

8. Click OK to start recording. The following events occur: . If you selected a script that has already been recorded, Robot asks if you want to overwrite it. Click Yes. (If you record over a previously-recorded script, you overwrite the script file but any existing properties are applied to the new script.)

. Robot is minimized by default.

. The floating GUI Record toolbar appears. You can use this toolbar to pause or stop recording, display Robot, and insert features into a script.

9. Start the application-under-test as follows:

a. Click the Display GUI Insert Toolbar button on the GUI Record toolbar.

b. Click the appropriate Start button on the GUI Insert toolbar.

c. Fill in the dialog box and click OK.

10. Perform actions as needed to navigate through the application.

11. Insert features as needed. You can insert features such as verification points, comments, and timers.

12. If necessary, switch from Object-Oriented Recording to low-level recording. Object-Oriented Recording examines Windows GUI objects and other objects in the application-under-test without depending on precise timing or screen coordinates. Low-level recording tracks detailed mouse movements and keyboard actions by screen coordinates and exact timing.

13. When finished, click the Stop Recording button on the GUI Record toolbar. The Robot main window appears as follows:

- The script that you recorded appears in a Script window within the Robot main window.

- The verification points and low-level scripts in the script (if any) appear in the Asset pane on the left.

- The text of the script appears in the Script pane on the right.

14. Optionally, change the script properties by clicking File > Properties.

Restoring the Robot Main Window During Recording

When Robot is minimized or is hidden behind other windows during recording, you can bring it to the foreground in any of the following ways:

. Click the Open Robot Window button on the GUI Record toolbar.

. Click the Robot button on the Windows taskbar.

. Use the hot key combination CTRL+SHIFT+F to display the window and CTRL+SHIFT+H to hide the window.


Naming Scripts Automatically

1. Open the GUI Record Options dialog box.

2. In the General tab, type a prefix in the Prefix box.

Clear the box if you do not want a prefix. If the box is cleared, you will need to type a name each time you record a new script.

3. Click OK or change other options.

The next time you record a new script, the prefix and a number appear in the Name box of the Record GUI dialog box.

For example, if the autonaming prefix is Test and six other scripts already begin with Test, Test7 appears in the Name box when you record a new script.


How to change the order of the object recognition methods for an object type?

1. Open the GUI Record Options dialog box.

2. Click the Object Recognition Order tab.

3. Select a preference in the Object order preference list. If you will be testing C++ applications, change the object order preference to C++ Recognition Order.

4. From the Object type list, select the object type to modify. The fixed set of recognition methods for the selected object type appears in the Recognition method order list in its last saved order.

5. Select an object recognition method in the list, and then click Move Up or Move Down. Changes made to the recognition method order take place immediately, and cannot be undone by the Cancel button. To restore the original default order, click Default.

6. Click OK.

Important Notes:

. Changes to the recognition method order affect scripts that are recorded after the change. They do not affect the playback of scripts that have already been recorded.

. Changes to the recognition method order are stored in the project. For example, if you change the order for the CheckBox object, the new order is stored in the project and affects all users of that project.

. Changes to the order for an object affect only the currently-selected preference. For example, if you change the order for the CheckBox object in the default preference, the order is not changed in the C++ Recognition Order preference.




Pausing and Resuming the Recording of a Script

To pause recording:

--Click the Pause button on the GUI Record toolbar. Robot indicates a paused state by:

----Depressing the Pause button.

----Displaying Recording Suspended in the status bar.

----Displaying a check mark next to the Record > Pause command.

To resume recording:

-- Click Pause again.

----Always resume recording with the application-under-test in the same state that it was in when you paused.

Robot has two recording modes

1. Object-Oriented Recording mode

Examines objects in the application-under-test at the Windows layer during recording and playback. Robot uses internal object names to identify objects, instead of using mouse movements or absolute screen coordinates. If objects in your application’s graphical user interface (GUI) change locations, your tests still pass because the scripts are not location dependent. As a result, Object-Oriented Recording insulates the GUI script from minor user interface changes and simplifies GUI script maintenance.

2. Low-level recording mode

Tracks detailed mouse movements and keyboard actions by screen coordinates and exact timing. Use low-level recording when you are testing functionality that requires the tracking of detailed mouse actions, such as in painting, drawing, or CAD applications.

To switch between the two modes during recording, do one of the following:

------Press CTRL+SHIFT+R.

------Click the Open Robot Window button on the GUI Record toolbar (or press CTRL+SHIFT+F) to bring Robot to the foreground. Click Record > Turn Low-Level Recording On/Off.

How to end the recording of a script?

Click the Stop Recording button on the GUI Record toolbar.

How to define script properties?

1. Do one of the following:

. If the script is open, click File > Properties.

. If the script is not open, click File > Open > Script. Select the script and click the Properties button.

2. In the Script Properties dialog box, define the properties. For detailed information about an item, click the question mark near the upper-right corner of the dialog box, and then click the item.

3. Click OK.

How to code a script manually?

1. In Robot, click File > New > Script.

2. Type a script name (40 characters maximum) and, optionally, a description of the script.

3. Click GUI.

4. Click OK. Robot creates an empty script with the following lines:

Sub Main

Dim Result As Integer

'Initially Recorded: 01/17/05 14:55:53

'Script Name: GUI Script

End Sub

5. Begin coding the GUI script.

How to add a new action to an existing script?

1. If necessary, open the script by clicking File > Open > Script.

2. If you are currently debugging, click Debug > Stop.

3. In the Script window, click where you want to insert the new actions. Make sure that the application-under-test is in the appropriate state to begin recording at the text cursor position.

4. Click the Insert Recording button on the Standard toolbar. The Robot window minimizes by default, or behaves as specified in the GUI Record Options dialog box.

5. Continue working with the application-under-test as you normally do when recording a script.

How to add a feature to an existing GUI script?

1. If necessary, open the script by clicking File > Open > Script.

2. If you are currently debugging, click Debug > Stop.

3. In the Script window, click where you want to insert the feature. Make sure that the application-under-test is in the appropriate state to insert the feature at the text cursor position.

4. Do one of the following:

- To add the feature without going into recording mode, click the Display GUI Insert Toolbar button on the Standard toolbar. The Robot Script window remains open.

- To start recording and add the feature, click the Insert Recording button on the Standard toolbar. The Robot window minimizes by default, or behaves as specified in the GUI Record Options dialog box. Click the Display GUI Insert Toolbar button on the GUI Record toolbar.

5. Click the appropriate button on the GUI Insert toolbar.

6. Continue adding the feature as usual.

How to batch compile scripts and library source files?

1. Click File > Batch Compile.

2. Select an option to filter the type of scripts or files you want to appear in the Available list: GUI scripts, VU scripts, or SQABasic library source files.

3. Optionally, select List only modules that require compilation to display only those files that have not yet been compiled or that have changed since they were last compiled.

4. Select one or more files in the Available list and click > or >>. Robot compiles the files in the same order in which they appear in the Selected list.

5. Click OK to compile the selected files.

How to insert a timer while recording or editing a script?

1. Do one of the following:

. If recording, click the Display GUI Insert Toolbar button on the GUI Record toolbar.

. If editing, position the pointer in the script and click the Display GUI Insert Toolbar button on the Standard toolbar.

2. Click the Start Timer button on the GUI Insert toolbar.

3. Type a timer name (40 characters maximum) and click OK. If you start more than one timer, make sure you give each timer a different name.

4. Perform the timed activity.

5. Immediately after performing the timed activity, click the Stop Timer button on the GUI Insert toolbar.

6. Select a timer name from the list of timers you started and click OK.

Playing Back a Script that Includes Timers

1. Click Tools > GUI Playback Options.

2. In the Playback tab, clear Acknowledge results.

This prevents a pass/fail result message box from appearing for each verification point. You can still view the results in the log after playback.

3. In the Playback tab, set the Delay between commands value to 0. This removes any extra Robot timing delays from the performance measurement. If you need a delay before a single command, click Insert > Delay and type a delay value.

4. Click OK.

How to insert a log message into a script during recording or editing?

1. Do one of the following:

. If recording, click the Display GUI Insert Toolbar button on the GUI Record toolbar.

. If editing, position the pointer in the script and click the Display GUI Insert Toolbar button on the Standard toolbar.

2. Click the Write to Log button on the GUI Insert toolbar.

After playback, you can view logs and messages using TestManager.

How to Choose Network Recording?

1. Click Tools > Session Record Options.

2. Click the Method tab, and click Network recording.

3. Optionally, click the Method:Network tab, and select the client/server pair that you will record. The default is to record all of the network traffic to and from your computer.

4. Optionally, click the Generator Filtering tab to specify the network protocols to include in the script that Robot generates.

How to Choose Proxy Recording

1. Click Tools > Session Record Options.

2. Click the Method tab, and click Proxy recording.

3. Click the Method:Proxy tab to:

. Create a proxy computer.

. Identify client/server pairs that will communicate through the proxy.

How to cancel a script in a single-script session?

1. During recording, click the Stop button on the Session Record floating toolbar.

2. In the Stop Recording dialog box, click Ignore just-recorded information.

3. Click OK in the Stop Recording dialog box.

4. Click OK to acknowledge that the session is being deleted.

How to cancel the current script in a multi-script session?

1. During recording, click the Split Script button on the Session Record floating toolbar.

2. In the Split Script dialog box, click Ignore just-recorded information.

3. Click OK.

How to cancel all scripts in a multi-script session?

1. During recording, click the Stop button on the Session Record floating toolbar.

2. Click OK in the Stop Recording dialog box.

3. Immediately click Cancel in the Generating Scripts dialog box.

When would you want to split a session?

If quick script development time is a priority - perhaps because testable builds are developed daily, or because web content is updated daily.

How to split a session into multiple scripts?

1. During recording, at the point where you want to end one script and begin a new one, click the Split Script button on the Session Record floating toolbar.

2. Enter a name for the script that you are ending, or accept the default name.

3. Click OK.

4. Repeat the previous steps as many times as needed to end one script and begin another.

5. After you click the Stop Recording button to end the recording session, type or select a name for the last script you recorded, or accept the default name.

How to import a session file and regenerate scripts?

You can import a session from a different computer into your current project.

1. In Robot, click Tools > Import Session. The Open dialog box appears.

2. Click the session file, then click Open. The session and its scripts are now in your project.

3. To regenerate the scripts in the session you imported, click Tools > Regenerate Test Scripts from Session, and select the session you imported.

4. To regenerate the suite, click Tools > Rational Suite TestStudio > Rational TestManager.

5. Click File > New Suite. The New Suite dialog box appears.

6. Select Existing Session, and click OK.

7. TestManager displays a list of sessions that are in the project. Click the name of the session that you imported, and click OK.

How to regenerate Scripts from a Session?

1. In Robot, click Tools > Regenerate Test Scripts from Session.

2. Click the name of the session to use.

3. Click OK to acknowledge that the regeneration operation is complete.

How to Define Script Properties in Robot?

1. Click File > Open > Test Script to open the Open Test Script dialog box.

2. Click the script you are defining properties for.

3. Click Properties.

4. Define the script’s properties, and click OK.

SilkTest

1. What is SilkTest?

SilkTest is a software testing automation tool developed by Segue Software, Inc.

2. What is the Segue Testing Methodology?

Segue testing methodology is a six-phase testing process:

1. Plan - Determine the testing strategy and define specific test requirements.

2. Capture - Classify the GUI objects in your application and build a framework for running your tests.

3. Create - Create automated, reusable tests. Use recording and/or programming to build test scripts written in Segue's 4Test language.

4. Run - Select specific tests and execute them against the AUT.

5. Report - Analyze test results and generate defect reports.

6. Track - Track defects in the AUT and perform regression testing.

3. What is AUT?

AUT stands for Application Under Test.

4. What is SilkTest Host?

SilkTest Host is a SilkTest component that manages and executes test scripts. The SilkTest Host usually runs on a machine separate from the machine where the AUT (Application Under Test) is running.

5. What is SilkTest Agent?

SilkTest Agent is a SilkTest component that receives testing commands from the SilkTest Host and interacts with AUT (Application Under Test) directly. SilkTest Agent usually runs on the same machine where AUT is running.

6. What is 4Test?

4Test is a test scripting language used by SilkTest to compose test scripts to perform automated tests. 4Test is an object-oriented fourth-generation language. It consists of 3 sets of functionalities:

1. A robust library of object-oriented classes and methods that specify how a testcase can interact with an application’s GUI objects.

2. A set of statements, operators and data types that you use to introduce structure and logic to a recorded testcase.

3. A library of built-in functions for performing common support tasks.

7. What is the DOM browser extension?

Document Object Model (DOM) browser extension is a SilkTest add-on component for testing Web applications. DOM browser extension communicates directly with the Web browser to recognize, categorize and manipulate objects on a Web page. It does this by working with the actual HTML code, rather than relying on the visual pattern recognition techniques currently employed by the Virtual Object (VO) extension.

8. What is the VO browser extension?

Virtual Object (VO) browser extension is a SilkTest add-on component for testing Web applications. The VO browser extension uses sophisticated pattern recognition techniques to identify browser-rendered objects. The VO extension sees Web pages as they appear visually; it does not read or recognize HTML tags in the Web application code. Instead, the VO extension sees the objects in a Web page (for example, links, tables, images, and compound controls) the way that you do, regardless of the technology behind them.

9. What is SilkTest project?

A SilkTest project is a collection of files that contains required information about a test project.

10. How to create a new SilkTest project?

1. Run SilkTest.

2. Select Basic Workflow bar.

3. Click Open Project on the Workflow bar.

4. Select New Project.

5. Double click Create Project icon in the New Project dialog box

6. On the Create Project dialog box, enter your project name and your project description.

7. Click OK.

8. SilkTest will create a new subdirectory under the SilkTest project directory and save all files related to the new project in that subdirectory.

11. How to open an existing SilkTest project?

1. Run SilkTest.

2. Select File menu.

3. Select Open Project.

4. Select the project.

5. Click OK.

6. SilkTest will open the selected project.

12. What is a SilkTest Testplan?

The SilkTest testplan is an outline that provides a framework for the software testing process and serves as the point of control for organizing and managing your test requirements. A testplan consists of two distinct parts: an outline, which is a formatted description of the test requirements, and statements, which are used to connect the testplan to SilkTest scripts and testcases that implement the test requirements.
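As an illustration, a small hypothetical testplan outline might look like the sketch below; the indented description lines form the outline, and the script: and testcase: lines are testplan statements that link the outline to a 4Test script file and its testcases (all names are placeholders):

Login functionality
    Test valid login
        script: LoginTest.t
        testcase: ValidLogin
    Test invalid password
        script: LoginTest.t
        testcase: InvalidPassword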

13. Where is a testplan stored?

A SilkTest testplan is stored in a file with .pln file extension.

14. How to create and edit a testplan?

1. Make sure your project is open.

2. Click the Files tab in the Project Explorer.

3. Right-click the Plan folder.

4. Click New File.

5. An untitled testplan file opens in the SilkTest testplan editor.

6. Click File/Save menu to save the testplan.

15. What are the types of text lines in a testplan file?

A testplan file contains text lines. There are 5 types of text lines in a testplan file:

1. Comment - Marked in green color: Providing commentary information.

2. Group description - Marked in black color: Providing descriptions for groups of tests. Tests in a testplan can be grouped into multiple levels of groups.

3. Test description - Marked in blue color: Providing descriptions for individual tests.

4. Testplan statement - Marked in dark red color: Providing relations to link scripts, testcases, test data, closed sub testplans or an include file to the testplan.

5. Open subplan file marker - Marked in magenta color: Providing relations to link sub testplans to be included in a master testplan.

16. How to create group and sub group descriptions in a testplan?

In a testplan, each text line starting from column 0 represents a top level group description. To create a sub group description:

1. Move the cursor to the next line below the top level group description.

2. Click Outline/Move Right.

3. The text line will be indented to the right to become a sub group description.

17. What are testplan attributes?

Testplan attributes are user-defined characteristics associated with test group descriptions and/or test descriptions. You can search, identify, and report on test cases based on the values of the different attributes.

18. What are the default testplan attributes?

SilkTest offers you 3 predefined default attributes:

1. Category: The type of testcase or group of testcases. For example, you can use this attribute to categorize your test groups as "Boundary value tests", "Navigation tests", etc.

2. Component: The name of the application modules to be tested.

3. Developer: The name of the QA engineer assigned to develop the testcase or group of testcases.

19. How to define new testplan attributes?

1. Make sure your test project is open.

2. Click Testplan/Define Attributes menu. The Define Attributes dialog box shows up. You should see predefined default attributes: Category, Component, and Developer.

3. Click the New button. The New Attribute dialog box shows up.

4. Enter a name for your new attribute. For example: "Level" to indicate the complexity level of test cases.

5. Select an attribute type: Normal, Edit, or Set.

6. Click OK.

20. How to define values for a testplan attribute?

You must define values for a testplan attribute before using it:

1. Make sure your test project is open.

2. Click Testplan/Define Attributes menu. The Define Attributes dialog box shows up. You should see the predefined default attributes and any attributes you have defined yourself.

3. Select an attribute. For example, "Component". The Values box should be empty.

4. Enter a value in Add box. For example, "Catalog".

5. Click Add. Value "Catalog" should be inserted into the Values box.

6. Repeat the last two steps to add more values.

21. Where are testplan attributes stored?

Testplan attributes are stored in the testplan initialization file, testplan.ini, in the SilkTest installation directory.

22. How to assign attribute values to test cases?

1. Make sure your testplan is open.

2. Click on the test case for which you want to assign an attribute value.

3. Click Testplan/Detail menu. The Testplan Details dialog box shows up.

4. Click the Test Attribute tab.

5. Click the Component field. The dropdown list shows up with all values of "Component".

6. Select one of the values in the dropdown list.

7. Click OK.

23. What is a test frame?

A test frame is a file that contains information about the application you are testing. The information stored in a test frame is used as a reference when SilkTest records and executes testcases. A test frame is stored in an include file with the .inc file extension.

24. How to create a test frame?

1. Make sure your Web browser is active and showing your Web application home page. Do not minimize this Web page window.

2. Make sure your test project is open.

3. Click File/New menu. The New dialog box shows up.

4. Select the Test Frame radio button.

5. Click OK. The New Test Frame dialog box shows up with a list of all active Web applications.

6. Select your Web application.

7. Enter a test frame name. For example: HomeFrame.inc.

8. Review the window name. It should be the HTML title of your Web application. You can rename it, if needed.

9. Click OK to close the New Test Frame dialog box.

10. Click File/Save menu.

25. What is stored in a test frame?

A test frame is a text file, which records the following types of information for a Web application:

1. Comment: Commentary information.

2. wMainWindow: A string constant to identify your application's home page.

3. Home page window: An object of the BrowserChild window class that holds the application home page.

4. sLocation: The URL of your application's home page.

5. sUserName and sPassword: User name and password, if needed to log in to your Web application.

6. BrowserSize: A pair of values to indicate the size of the browser window.

7. Home page objects: A list of all objects on the home page, such as HtmlImage, HtmlText, HtmlLinks, etc.
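Put together, a test frame is simply a set of 4Test declarations. A trimmed, hypothetical example follows; the window name, URL, credentials, and page objects are placeholders, and the exact layout generated by SilkTest may differ by version:

const wMainWindow = HomePage    // identifies the application's home page window

window BrowserChild HomePage
    tag "My Application Home Page"
    const sLocation = "http://www.example.com/"
    const sUserName = "demo"
    const sPassword = "secret"
    // home page objects recorded by SilkTest
    HtmlImage CompanyLogo
        tag "Company Logo"
    HtmlLink SiteMap
        tag "Site Map"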

26. How does the DOM browser extension identify a Web application UI object?

A Web application UI object is identified in two parts:

1. Identify the Web browser window where the Web application is running. For example, one Web browser window can be identified as "Browser.BrowserChild("Yahoo Home Page")"; another can be identified as "Browser.BrowserChild("Google Home Page")".

2. Identify the Web UI object based on the HTML element that represents the UI object.

For example, an image in a Web page can be identified as "HtmlImage("Yahoo Logo")";

A hyperlink in a Web page can be identified as "HtmlLink("Site Map")". The full identification of a Web application UI object is the concatenation of the browser window identification and the HTML element identification. For example, the Yahoo logo image is identified as:

Browser.BrowserChild("Yahoo Home Page").HtmlImage("Yahoo Logo"). The site map link is identified as: Browser.BrowserChild("Google Home Page").HtmlLink("Site Map").

27. What is the syntax of the UI object identifier used by the DOM extension?

The DOM browser extension uses the following syntax for Web UI objects:

Browser.BrowserChild("page_title").html_class("object_tag")

1. "page_title" is the title of the Web page, defined by the HTML "TITLE" tag.

2. "object_tag" is the label of the HTML element. How a HTML

element is labeled depending on the type of HTML element.

28. What is multi-tagging?

Multi-tagging is a technique used by the DOM browser extension to identify a Web page UI object. Whenever possible, the DOM extension inserts more than one tag into the object identifier, in the following format:

Browser.BrowserChild("page_title").html_class("caption_tag|#index_tag|window_tag")

1. "caption_tag" is the caption of the HTML element.

2. "#index_tag" is the index of this HTML element, counting from the beginning of this page of the same class of HTML elements.

3. "window_tag" is the window identifier.

29. How to add objects of other pages to a test frame?

If your Web application has pages other than the home page, you should also record their page objects into the test frame:

1. Make sure your Web browser is active and showing another page of your Web application.

2. Make sure SilkTest is running.

3. Click File/Open menu.

4. Select your test frame file. For example: HomeFrame.inc.

5. Click OK to open the test frame.

6. Click Record/Window Declarations menu. The Record Window Declarations dialog box shows up.

7. Click your Web application window. Web page objects are recorded in the Record Window Declarations dialog box.

8. Press Ctrl+Alt to pause the recording.

9. Click "Paste to Editor" button. All recorded objects will be inserted into the test frame.

10. Repeat this for other Web pages, if needed.

30. How to specify a browser extension for a Web application?

1. Run SilkTest.

2. Open Internet Explorer (IE).

3. Enter the URL of the Web application.

4. Leave the IE window open with the Web application showing. Don't minimize the IE window.

5. Go back to the SilkTest window.

6. Select Basic Workflow bar.

7. Click Enable Extensions on the Workflow bar.

8. The Enable Extensions dialog will show up. Your Web application running in the IE window will be listed in the dialog box.

9. Select your Web application and click Select.

10. The Extension Settings dialog will show up. Click OK to enable the DOM browser extension.

31. What is DefaultBaseState?

The DefaultBaseState is the starting point of a test project, from which the recovery system can automatically restart your test cases when a test case fails to continue.
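In 4Test, a testcase names its application state in its declaration, and you can derive your own states from DefaultBaseState. A hypothetical sketch (the state, window, and testcase names are placeholders):

appstate LoggedIn () basedon DefaultBaseState
    // drive the application from the base state to a logged-in state
    HomePage.LoginButton.Click ()

testcase CheckAccount () appstate LoggedIn
    Verify (AccountPage.Exists (), TRUE)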

32. How to test your DefaultBaseState?

1. Close your Web application and other Web browsers.

2. Make sure your test frame is open.

3. Click Run/Application State menu. The Run Application State dialog box shows up with a list of states. One of them should be DefaultBaseState.

4. Select DefaultBaseState.

5. Click the Run button. The Runtime Status dialog box shows up, followed by the Results File dialog box.

6. You should see no error message in the results file.

33. What are the important aspects of a test case?

1. Each test case must be independent of other test cases.

2. Each test case should have a single test purpose.

3. Each test case should start from a base state and return to the same base state.

34. What is the standard flow of execution of a test case?

1. Start from the base state.

2. Drive the application to the state where the expected result should occur.

3. Verify the actual result against the expected result.

4. Declare the test case as passed or failed.

5. Return to the base state.
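Expressed as a minimal 4Test sketch (the testcase, window, and object names are hypothetical), the flow above looks like this:

testcase CheckSiteMap () appstate DefaultBaseState
    // the recovery system starts the testcase from the base state;
    // drive the application to the state where the expected result occurs
    HomePage.SiteMap.Click ()
    // verify actual against expected; a failed Verify raises an exception
    // and the testcase is declared failed
    Verify (SiteMapPage.Exists (), TRUE)
    // when the testcase ends, the recovery system returns the application
    // to the base state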

35. How to record a test case?

1. Run SilkTest.

2. Click Option/Runtime menu. The Runtime Options dialog box shows up.

3. Edit the Use Files field to include your test frame file and the explorer.inc file. For example: ...\HomeFrame.inc,extend\explorer.inc.

4. Make sure IE 5.x DOM is selected.

5. Click OK to close the Runtime Options dialog box.

6. Open your test project.

7. Click Record/Testcase menu. The Record Testcase dialog box shows up.

8. Name your test case. For example: LoginTest.

9. Select DefaultBaseState in the Application State dropdown list.

10. Click the Start Recording button. The Record Testcase dialog closes. Your Web application will be started automatically by SilkTest, based on the information in the test frame file. The SilkTest Editor window closes, and the Record Status dialog box shows up.

11. Continue to use your Web application. SilkTest records everything you do in your application.

12. Click the "Done" button on the Recording Status dialog box to stop recording. The Recording Status dialog box closes. The Record Testcase dialog box shows up again.

13. Click Paste to Editor. SilkTest will insert the recorded activities as 4Test statements into a script file. The Record Testcase dialog closes.

14. Click File/Save menu to save the script file. You can enter a script file name. For example, LoginTest.t.

36. How to include a test case into a testplan?

1. Make sure your testplan is open.

2. Enter a test description into your testplan. For example, "Test login process".

3. Select this test description.

4. Click Testplan/Detail menu. The Testplan Detail dialog box shows up.

5. Click the Test Execution tab on the Testplan Detail dialog box.

6. Click the "Scripts" button to browse and select a test case script file. For example, LoginTest.t.

7. Click the "Testcases" button, to select a testcase recored in the specified script file.

8. Click OK to close the Testplan Detail dialog box.

37. How to record a test case into a testplan automatically?

Test cases can be recorded first without a testplan, then included in a testplan later. Test cases can also be recorded into a testplan directly:

1. Make sure your testplan is open.

2. Enter a test description into your testplan. For example, "Test change password".

3. Select this test description.

4. Click Record/Testcase menu.

5. Enter a name for the script file.

6. Click Open. The Record Testcase dialog box shows up.

7. Enter a testcase name in the Testcase Name field.

8. Select DefaultBaseState in the Application State dropdown list.

9. Click the Start Recording button. The Record Testcase dialog closes. Your Web application will be started automatically by SilkTest, based on the information in the test frame file. The SilkTest Editor window closes, and the Record Status dialog box shows up.

10. Continue to use your Web application. SilkTest records everything you do in your application.

11. Click the "Done" button on the Recording Status dialog box to stop recording. The Recording Status dialog box closes. The Record Testcase dialog box shows up again.

12. Click Paste to Editor. SilkTest will insert the recorded activities as 4Test statements into a script file. The Record Testcase dialog closes.

13. Click File/Save menu to save the script file. You can enter a script file name. For example, ChangePasswordTest.t.

38. How to define an object verification in a test case?

While recording a test case, you can define verification points to verify UI objects:

1. Make sure you are in the process of recording a testcase.

2. Make sure the Record Status dialog box is on the screen.

3. Make sure your recording reached the Web page that has the UI object you want to verify.

4. Click the background (blank area) of the Web page. Do not click any objects on the page.

5. Press Ctrl-Alt. The Verify Window dialog box shows up. All the objects on the current Web page are listed on the Verify Window dialog box.

6. Select the object to be verified in the object list. Un-select all other objects.

7. Select the property to be verified in the property list. Un-select all other properties.

8. Click OK to close the Verify Window dialog box.

9. Continue your recording.
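When the recording is pasted to the editor, each verification point becomes a 4Test statement in the script. A hypothetical sketch, verifying simply that one declared object exists (a recorded verification would compare whichever properties you selected):

Verify (HomePage.CompanyLogo.Exists (), TRUE, "logo should appear on the home page")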

39. How to run a test case from a test script file?

A test script file can store multiple test cases. You can run a testcase from a test script file:

1. Open the test script file.

2. Select the test case in the test file.

3. Click Run/Testcase menu. The Run Testcase dialog box shows up.

4. Click the Run button. SilkTest starts to run the test case.

5. Do not touch mouse or keyboard, to avoid interrupting the test case execution.

6. SilkTest finishes executing the testcase. The Results window shows up with the execution result.

7. Review the execution result.

40. How to run a test case from a testplan file?

If a testcase is linked to a testplan, you can run it from the testplan:

1. Open the testplan.

2. Select the test description line which has the testcase linked.

3. Click Run/Testcase menu. The Run Testcase dialog box shows up.

4. Click the Run button. SilkTest starts to run the test case.

5. Do not touch mouse or keyboard, to avoid interrupting the test case execution.

6. SilkTest finishes executing the testcase. The Results window shows up with the execution result.

7. Review the execution result.

41. How to run all test cases in a testplan?

1. Open the testplan.

2. Click Run/Run All Tests menu. SilkTest starts to run all the test cases in the testplan.

3. Do not touch mouse or keyboard, to avoid interrupting the test case execution.

4. SilkTest finishes executing the testcases. The Results window shows up with the execution result.

5. Review the execution result.

42. How to select a group of test cases in a testplan to run?

Usually a testplan contains a large number of test cases. Sometimes you don't want to run all of the test cases in the testplan; instead, you want to select a group of them and run only those:

1. Open the testplan.

2. Select the test description line (linked to the testcase) to mark.

3. Click Testplan/Mark menu. The selected test description line is marked.

4. Repeat this process to select more linked testcases.

5. Click the Run/Run Marked Tests menu. SilkTest runs all the marked testcases.

6. Do not touch mouse or keyboard, to avoid interrupting the test case execution.

7. SilkTest finishes executing the testcases. The Results window shows up with the execution result.

8. Review the execution result.

Exercise for You

(Model questions from certification programs)

Quality Assurance or Quality Control?

A management activity, frequently performed by a staff function

a) Quality Assurance

b) Quality Control

Concerned with all of the products that will ever be produced by a process (not just one project)

a) Quality Assurance

b) Quality Control

The responsibility of the team/workers

a) Quality Control

b) Quality Assurance

Identifies defects for the primary purpose of correcting defects

a) Quality Control

b) Quality Assurance

Establishes (or helps to establish) processes

a) Quality Control

b) Quality Assurance

Concerned with a specific product or project

a) Quality Control

b) Quality Assurance

Sets up measurement programs to evaluate process effectiveness

a) Quality Control

b) Quality Assurance

Verifies whether specific attributes are in, or not in, a specific product or service.

a) Quality Assurance

b) Quality Control

Identifies weaknesses in processes and improves them.

a) Quality Assurance

b) Quality Control

Evaluates the life cycle itself

a) Quality Control

b) Quality Assurance

Makes sure bugs don't happen again on other projects

a) Quality Assurance

b) Quality Control

Relates to a specific product or service

a) Quality Assurance

b) Quality Control

Test Principles and Concepts

Which of these is NOT part of the Cost of Quality?

a) Failure costs

b) Error Detection costs

c) RTF costs

d) Error prevention costs

The term "fit for use" refers to whose view of quality?

a) Supplier's

b) Auditor's

c) Producer's

d) Customer's

Which of these is a challenge/obstacle to implementing testing in an organization?

a) People think testing is not essential for delivery; testing is often unstructured and subjective

b) Testing is error prone; testing is too expensive

c) Testing is often not managed properly; the mindset that you can test quality into software

d) All of the above

"PDCA", or "Plan, Do, Check, Act" is also known as

a) The Shewhart cycle

b) The Deming Wheel

c) both

d) neither

What are the three main categories defects can fall into?

a) Wrong, Missing, and Extra

b) Regression, Unit, and Integration

c) FUBAR, really FUBAR, and uber-FUBAR

d) display, processing, output

Which of these is NOT an important part of a Test Strategy?

a) The procedure for logging defects

b) how you will validate that the system meets user requirements

c) how you will validate the software at each stage of development

d) how you will use test data to examine the behavior of the system

Which of these are good ways to raise management's awareness of the importance of testing?

a) Make sure management knows what they get for the money they spend on testing

b) relay other benefits of testing (shorter test times, higher quality software)

c) Collect and distribute articles about testing

d) all of the above

Continuous process improvement only works if you:

a) Monitor performance of prior improvement initiatives

b) base improvement efforts on the organization's needs and business goals

c) enforce the newly developed processes

d) all are needed

Which is a more effective approach to risk mitigation?

a) Test based on user requirements

b) Test based on system specifications

c) Test more heavily in areas deemed higher risk

d) all of the above

Establishing a testing policy should include four main criteria. A definition of testing, the testing system, evaluation methods, and ________?

a) Standards against which testing will be measured

b) Data Requirements for a typical project

c) Templates for deliverables generated by the test group

d) All of the above

Which of these is the most effective method for establishing a testing policy?

a) Information Services Consensus Policy

b) Users Meeting

c) Industry Standard

d) Management Directive

During what phase of the project should you START thinking about Testing Scenarios?

a) Requirements

b) Maintenance

c) Coding

d) Design

What is the appropriate timing for Static, Structural tests?

a) Analysis and design only

b) testing phase

c) as early as possible and in every phase thereafter

d) not until the coding phase

Test coverage tools are useful starting in which phase?

a) Any phase

b) Analysis and Design

c) Testing (functional testing)

d) Coding (unit testing)

A Test Factor is:

a) The risk or issue that needs to be addressed during testing

b) a variable in a test script

c) an error inserted by programmers to measure the effectiveness of testing

d) part of the audit trail

________ means that the data entered, processed, and output by the application is accurate and complete.

a) Completeness

b) File integrity

c) Correctness

d) Audit trail

Validating that the right file is used and the data on that file is correct and in the correct sequence is known as:

a) Structural analysis

b) Data dictionary validation

c) Black box testing

d) File integrity testing

_____________ can substantiate the processing that has occurred and allow analysts to reconstruct what happened at any point in a transaction.

a) Continuity of Processing

b) None of these

c) Data Dictionaries

d) Audit Trails

Failover testing at Chase is verification that ___________ is intact

a) Continuity of processing

b) audit trails

c) maintainability

d) both A and C

Processing time and "up time" goals are examples of

a) Statistical process control

b) metrics

c) service levels

d) workbenches

Access control testing is also known as

a) Error handling

b) Security testing

c) Static testing

d) Fault based testing

Audits to ensure that the system is designed in accordance with organization strategy, policies, procedures, and standards are designed to test which quality factor?

a) Best practices

b) correctness

c) maintainability

d) compliance

An application which performs its intended function with the required precision over an extended period of time has this quality factor:

a) Validity

b) ease of use

c) service level

d) reliability

If it is difficult to locate and fix errors in a program, that program is missing which quality factor?

a) Coupling

b) maintainability

c) portability

d) ease of operation

The quality factor "ease of use" is best measured by doing what type of testing?

a) Performance test

b) manual support test

c) white box test

d) acceptance test

Portability refers to

a) How easy it is to transfer a program to other hardware or OS

b) whether it's possible to uninstall the program

c) both of these

d) neither of these

The effort required to interconnect components of an application to other applications for data transfer is:

a) Coupling

b) I/O

c) flowchart analysis

d) system testing

If defects are hard to find, that's a sign that...

a) Test coverage is inadequate

b) there are errors in the test scripts

c) neither of these

d) either of these

Which way(s) can lower the cost of testing without reducing its effectiveness?

a) Phase containment

b) the V concept of testing

c) testing the artifacts of each development phase, not just the program itself

d) all of the above

________ ensures that we designed and built the right system, while _________ ensures that the system works as designed.

a) Validation, verification

b) white box testing, black box testing

c) verification, validation

d) the project manager, the test team

Walkthroughs, peer reviews, and other structural tests tend to be __________ tasks

a) Verification

b) quality control

c) quality assurance

d) validation

Input, Procedures to DO, Procedures to CHECK, and Output are the four components of the:

a) Workbench concept

b) test phase

c) reporting cycle

d) Shewhart cycle

Which of these are considerations when developing testing methodologies?

a) The type of development project, the type of software system

b) tactical risks, project scope

c) both of these

d) neither of these

Deriving test cases based on known program structure and logic is...

a) Control testing

b) white box testing

c) black box testing

d) fault-driven testing

Functional testing that simulates the end user's actions is known as...

a) Access control testing

b) performance testing

c) black box testing

d) structural testing

Breaking variables into chunks that should have the same expected results is:

a) Workbench

b) usability testing

c) equivalence partitioning

d) data mapping

Developing tests based on where you think the program's weak points are is known as

a) Error-handling testing

b) risk based testing

c) error guessing

d) negative testing

Top-down and Bottom-up are two ways to approach

a) Incremental testing

b) thread testing

c) boundary analysis

d) none of these

An end-to-end, task based test that uses integrated components of a system is a...

a) Performance test

b) thread test

c) unit test

d) structural test

Which is a benefit of having a user committee develop the company's test policy?

a) All involved parties participate and sign off

b) outside users learn the options and costs associated with testing

c) testing and quality are seen as organization-wide responsibilities, not just IT

d) all of these

Which of these types of reviews is NOT an effective phase containment mechanism?

a) Post implementation review

b) decision point review

c) phase end review

d) in process review

Which of these was not a Quality professional?

a) Deming

b) Townsend

c) Juran

d) Pareto

The successful implementation of a quality improvement program will have what long-term effect on productivity?

a) There is no relationship between quality and productivity

b) it will lower productivity

c) it will raise productivity

d) productivity will remain the same

Which of these are some of Deming's 7 Deadly Management Diseases?

a) Lack of constancy of purpose and emphasis on short-term profits

b) evaluation of performance or annual review of performance

c) running an organization on visible figures alone and excessive costs of warranty

d) all of these

The scientific method is

a) Relevant to developers

b) a logical, systematic approach to processes

c) useful when developing test cases

d) all of these

Deming's biggest contribution to the quality profession is:

a) Zero Defects Day

b) Using slogans and targets to motivate the work force

c) focusing on process improvement, not the product itself

d) all of these

Instituting pride in workmanship and eliminating numerical quotas were suggestions for management created by

a) Pareto

b) Shewhart

c) Deming

d) Juran

Which is an important source of data for Continuous Process Improvement?

a) Defect databases

b) Post-mortems

c) Neither of these

d) both of these

Who should NOT participate in testing?

a) End users

b) developers

c) management

d) all should participate

According to Deming, 90% of defects are attributable to:

a) Vague user requirements

b) programmer error

c) process problems

d) communication errors

Which of these are NOT test factors?

a) Error guessing, incremental testing, boundary analysis

b) compliance, reliability, access control

c) correctness, audit trail, continuity of processing

d) maintainability, portability, ease of operation

A test strategy matrix does not include:

a) Test phases

b) risks

c) test factors

d) script mapping

Of the four options listed below, which is able to detect an infeasible path?

a) Performance testing

b) black box testing

c) structural testing

d) manual support testing

Black box testing, thread testing, and incremental testing are all kinds of:

a) Dynamic testing

b) static analysis

c) both of these

d) neither of these

Which catches errors earlier, verification or validation?

a) Verification

b) Validation

c) Neither

d) Both

Mapping requirements to tests in order to prove that the system functions as designed is known as:

a) Data mapping

b) requirement validation

c) requirements tracing

d) metrics gathering

Who should not be allowed to participate in and contribute to the improvement of processes?

a) Line workers

b) nobody - all should be allowed

c) the author of the original process

d) management

Improving weaknesses in a process you just piloted is an example of which piece of the Shewhart cycle?

a) Do

b) Act

c) Plan

d) Check

Tester's Role

Which document contains both technical and non-technical instructions to users during setup?

a) Operations manual

b) user's manual

c) system specifications

d) installation and conversion plan

Which is true of Security and Internal Control-related specs?

a) They must be written in a testable manner

b) They should be part of a living document that is updated during each phase

c) they must have signoff from security experts and testers

d) all of the above

A short, preliminary document that states deficiencies in existing capabilities, new or changed requirements, or opportunities for increased economy or efficiency is known as a:

a) Statement of requirements

b) needs statement

c) change control

d) none of these

The test analysis and security evaluation report should include all of these EXCEPT:

a) Security evaluation sub-report

b) the capabilities and deficiencies

c) project signoff by all team members

d) Documents the test results and findings

The document that includes the physical characteristics for storage and design is...

a) System decision paper

b) specifications document

c) requirements document

d) data dictionary

A Feasibility Study should include:

a) Analysis of the objectives, requirements, and system concepts

b) evaluation of alternative approaches for achieving the objectives

c) identification of a proposed approach

d) all of the above

_________ Tests tend to uncover errors that occur during coding, while _______ tests tend to uncover errors that occur in implementing requirements or design specs.

a) Manual, automated

b) structural, functional

c) automated, manual

d) functional, structural

In the ___________, specific types of sensitive data are identified, and the degree and nature of the sensitivity is outlined.

a) Data sensitivity/criticality description

b) data requirements document

c) functional specification

d) risk mitigation plan

The __________ identifies internal control and security vulnerabilities, the nature and magnitude of associated threats, potential for loss, and recommended safeguards. This should be a living document that is updated during each phase.

a) Statement of requirements

b) technical specification

c) needs statement

d) risk document

Which item(s) should be present in a VV&T plan?

a) Detailed description of procedures

b) test data

c) evaluation criteria

d) All of the above

The functional/security specifications need to be approved...

a) During requirements

b) prior to the start of coding

c) only on high risk projects

d) none of these

Which is NOT true of the User Manual?

a) It should be written in non-technical terms

b) it contains a full description of how to use the application

c) it describes when to use the application

d) these are all true

Which of these are examples of structural system testing techniques?

a) Stress testing, execution testing, recovery testing, operations testing, compliance testing, security testing

b) Black box testing, requirements-based testing, usability testing

c) Compliance testing, requirements-based testing, black-box testing

d) coverage analysis, security testing, performance testing

The Cost Benefit Analysis...

a) Provides stakeholders with adequate cost and benefit information.

b) Might be a separate doc, or might be part of the feasibility study.

c) Neither of these

d) Both of these

The __________ is like a user manual but is more technical and geared toward Operations Personnel

a) Operations Manual

b) Technical Manual

c) Help System

d) none of these

The document that contains the definition of what is to be produced - including operating environment and development plan, proposed methods and procedures is...

a) Project plan

b) test plan

c) system specification

d) requirements document

Identifying important quality factors involves...

a) Considering the basic characteristics of the application

b) considering the life cycle implications and performing trade-offs among the tentative list of quality factors

c) ranking all quality factors and noting the most important ones

d) all of the above

A System Decision Paper...

a) Records essential information such as mission need, milestones, thresholds, issues and risks, etc.

b) is the master document related to the whole SDLC.

c) Will eventually include signoffs that the app is operating properly

d) all of the above

The ideal software testing process involves testing artifacts produced during development, such as requirements documents and design specifications.

a) True

b) False

Determining how the system functions when subjected to large volumes is known as:

a) Data-driven testing

b) Disaster testing

c) Performance testing

d) Stress testing

The document that includes all goals and activities for all phases, including resource estimates, methods for design, documentation, etc. through to installation and operation, is...

a) None of these

b) the system decision paper

c) the project plan

d) the risk analysis

Test Management

Which is true of Suggestion Day?

a) It is a full staff offsite meeting to focus on quality improvement

b) all staff meets in one room to discuss all suggestions on the table

c) it uses the Pareto principle to guide its agenda

d) It was proposed by Deming

Which of these is a "DON'T" when giving criticism?

a) Have the facts and be specific on expectations

b) be prepared to help them improve

c) do it in private

d) state the consequences if they do not improve

Which is not a good way to combat Groupthink?

a) Assign someone in the group to play Devil's advocate

b) use outside experts as resources

c) inform the group that the meeting will last until a unanimous decision is reached

d) break the team up into subgroups

When delivering an Oral System Proposal, you should emphasize your enthusiasm to do the project, but not emphasize the expertise of yourself and your staff.

a) True

b) False

The formalized method of brainstorming, having a question and answer period, ranking solutions, and then thinking critically about them is known as:

a) Suggestion day

b) nominal group technique

c) continuous process improvement

d) affinity diagramming

Getting on the customer's wavelength, getting the facts, taking notes, establishing an action program, and taking action are the recommended steps of...

a) Groupthink

b) requirements gathering

c) task force management

d) conflict resolution

The content of a System Proposal should include the people costs and the effects on other systems, as well as the standard cost benefit analysis.

a) True

b) False

It is acceptable to close the System Proposal without discussing everything on the agenda if the audience shows signs that they will approve the project.

a) False

b) True

Task forces should tackle more than one issue at a time.

a) False

b) True

The symptoms below refer to what phenomenon? Collective efforts to rationalize or discount negative info; a tendency to ignore ethical or moral consequences of group decisions; stereotyped views of other groups; active pressure to change views.

a) Loyalty

b) groupthink

c) an us vs them mentality

d) newspeak

Which one of these is NOT one of the three components of receiving information?

a) Responding to the speaker

b) attending to the speaker

c) hearing the speaker

d) understanding the speaker

The ideal size for a task force is:

a) 3-8 members

b) two people from each group that will be impacted by the decision

c) as many members as are interested in participating

d) 5 members

Therapeutic listening is:

a) Sympathetic, empathetic, and helps gain speaker's confidence

b) useful in conflict resolution

c) helpful in understanding the reasons why events occurred

d) all of the above

If a person says "Sure, we will make the deadline" but has a smirk on their face, which two information channels are out of sync, and which should you pay more attention to?

a) Graphic channel and information channel - information channel

b) body channel and verbal channel - body channel

c) verbal channel and graphic channel - verbal channel

d) information channel and body channel - body channel

When trying to piece together facts from several different people, _________ listening is important.

a) Therapeutic

b) comprehensive

c) critical

d) appreciative

Which type of listening is typically not needed during a verbal walkthrough (Unless Phyllis is on the call)?

a) Critical listening

b) Comprehensive listening

c) Discriminative listening

d) Appreciative listening

Which of these is NOT a good way to portray a successful image?

a) Being purposeful in body language, making eye contact

b) being analytical and decisive

c) being brief, specific, and direct in conversation

d) not engaging in casual conversation with management

It is a good idea to give management regular updates on the current status of task force activities.

a) No, because the only output of the task force should be the final report, a unanimous decision

b) yes, because they need to approve further funding

c) no, because they will take credit for the task force's work

d) yes, because they assigned you to the task force

____________ is selective listening to determine what you are looking to hear, weeding out the rest

a) Critical listening

b) Comprehensive listening

c) Discriminative listening

d) Appreciative listening

Which of these is NOT an important part of initiating an action program for conflict resolution?

a) Admitting the error if you are responsible

b) taking action immediately

c) reporting the conflict to governance bodies

d) stating the solution and getting agreement on it

When group members experience strong feelings of solidarity and loyalty, the desire for unanimity may override the motivation to logically and realistically evaluate alternative courses of action...

a) it is usually BAD and leads to bad decisions.

b) it is called groupthink

c) both of these

d) neither of these

In order to get the "whole" message with minimal distortion, _______ listening is necessary.

a) Discriminative

b) Therapeutic

c) Comprehensive

d) Critical

Build the Test Environment

________ and ________ are used within individual workbenches to produce the right output products.

a) Tools and techniques

b) procedures and standards

c) none of these

d) processes and walkthroughs

The Standards Manager's role(s) include:

a) Training the standards committee

b) promoting the concept of standards

c) choosing members for ad hoc committees

d) a and b only

A Toolsmith is:

a) Always the head software engineer

b) the person with specialized knowledge of the tool being introduced

c) both of these

d) neither of these

Whose responsibility is it to set policies?

a) Testers

b) Senior management

c) Lead technical staff

d) Independent audit boards

Which of these is typically NOT the role of management during tool selection?

a) Identify tool objectives

b) define selection criteria

c) make the final selection of the tool or the source

d) prepare a ranked list of selection choices

If procedures and standards are working properly, they will:

a) Reduce the cost of doing work

b) increase quality

c) make budgets and schedules more predictable

d) all of the above

The standards program is driven by management policies, but workers should develop their own standards and procedures.

a) False

b) True

The software engineer's role in tool selection is...

a) To identify, evaluate, and rank tools, and recommend tools to management

b) to determine what kind of tool is needed, then find it and buy it

c) to initiate the tool search and present a case to management

d) none of these

The highest level manager from each relevant group should be asked to join the standards committee

a) true

b) false

A standards program should:

a) Be able to enforce standards

b) roll out ad hoc committees to develop standards

c) both of these

d) neither of these

A valid risk to consider during tool selection is:

a) Difficult to use

b) requires excessive computer resources

c) lacks adequate documentation

d) all of the above

A _______ is the step-by-step method followed to ensure that standards are met

a) SDLC

b) Project Plan

c) Policy

d) Procedure

It is advisable to follow the QAI-recommended set of steps during tool selection because...

a) A methodical process will help avoid the pitfalls of "just picking one"

b) it assigns a toolsmith who can troubleshoot or tune the tool to the organization's needs

c) both of these

d) neither of these

A _______ is the measure used to evaluate products and identify nonconformance.

a) Policy

b) standard

c) procedure

d) mission

The _______ should accept topics for standards, but the _________ should develop individual technical standards.

a) Standards committee, ad hoc committee

b) standards manager, ad hoc committee

c) standards committee, standards manager

d) standards manager, standards committee

The ________ defines quality as "meeting the requirements"

a) Producer

b) developer

c) consumer

d) tester

When developing ___________, it is important to ask these questions: is every organizational function involved? Are activities having a vested interest, such as authors, involved?

a) Developing standards

b) conducting walkthroughs

c) creating test cases

d) all of the above

A ________ is the managerial desires and intents concerning either process (intended objectives) or products (desired attributes)

a) Policy

b) procedure

c) standard

d) vision

When implementing a new tool, it is a good idea to assign one person full time to the tool's deployment.

a) True

b) False

When doing an informal tool acquisition, it is still necessary to write an RFP.

a) True

b) False

Test Design

A good test suite will include the following test(s)

a) Tests of normally occurring transactions

b) tests using invalid inputs

c) tests that violate established edit checks

d) A and B only

e) A, B, and C

Before performing volume testing, it is important to challenge each input and output data element, to determine potential limitations, and then document those limitations.

a) True

b) False

Before selecting which conditions you are going to include in your test suite, you should rank the test conditions according to risk.

a) Always true

b) True if there is a limited test budget

c) False

Analysis of test results should include...

a) System components affected

b) terminal and onscreen outputs

c) order of processing

d) compliance to specs

e) all of the above

Assuming you could reach 100% of any of the coverage types below, which one would leave the MOST potential for unexecuted code (and therefore, undiscovered errors)?

a) Modified decision coverage

b) global data coverage

c) statement coverage

d) branch coverage

e) decision/condition coverage

When evaluating whether a test suite is complete (e.g., doing a peer review or inspection), which of the following should NOT be considered?

a) Whether the scripts have appropriate sign on and setup procedures

b) Whether the scripts address all items on each (onscreen) menu

c) Whether the scripts include data setup and other prerequisites

d) Whether the scripts test single transactions, multiple transactions, or both

e) all should be considered

When performing volume testing, a small percentage of invalid data should be used.

a) True

b) False

A Pseudoconcurrency test is a test that validates...

a) Data security when two or more users access the same file at the same time

b) file integrity

c) volume testing

d) A and B only

e) A and C only

Which of these is not a good use of a code coverage analysis?

a) Simply to measure how well your test cases cover the code

b) analyze test coverage against system requirements

c) find "holes" in your testing and develop new test cases as supplements

d) start with black box testing, measure coverage, and use white box testing to test the remainder

e) All are good uses

Stop procedures are important in scripting because...

a) The person executing the script needs to know what kinds of errors would invalidate the rest of the script

b) the person running the script needs to know if they can pick up the script on a later step after logging a bug

c) both of these

Regardless of whether a script is manual or automated, it is important to consider:

a) Environmental constraints

b) think time

c) file states/contents

d) processing options

e) all of the above

When developing your test suite, you should NOT:

a) Use transactions with a wide range of valid and invalid input data

b) use all forms of documentation to guide test cases and the data associated with them

c) start by testing one data point at a time so you can isolate the cause of defects, then move on to combinatorial tests

d) attempt to test every possible combination of inputs

e) you should do all four of the above

Performing Tests

Maintainability tests should be performed by...

a) The application developer

b) independent groups

c) QA

d) all of the above

Root cause of a defect should be recorded because...

a) It helps guide where more testing needs to be done

b) it helps QA analyze which processes need to be improved

c) it helps management know where bugs are coming from

d) all of the above

At Testing phase entry and exit, audits should be performed to...

a) Ensure that testing complies with the organization's policies, procedures, standards, and guidelines

b) ensure that all test cases have been executed

c) ensure that all defects have been fixed

d) b and c only

Integration testing should begin...

a) Once two or more components have been unit tested and critical defects fixed

b) once all components have been unit tested and critical defects fixed

c) once all components have been unit tested

d) once two or more components have been unit tested

Testing "handshakes" and other communication points from system to system is part of...

a) Coupling tests

b) system tests

c) user acceptance tests

d) all of the above

System testing should begin...

a) Once the minimal components have been integration tested

b) when the environment is ready

c) when test data is conditioned

d) all three of the above must be complete

When reporting a defect, which of these should be included?

a) Expected versus actual results

b) potential causes

c) which pass/fail criteria failed

d) all of the above

Negative tests to trigger data validation rules during file processing (to make sure they are kicked out) are an example of _______ testing

a) Error guessing

b) file integrity

c) authorization

d) equivalence partitioning

What is a good way to tell it's time to stop system testing?

a) When the deadline is reached

b) when all testing hours are used up

c) when all scripts have been executed and all bugs have been fixed

d) when metrics such as MTTF, test coverage, and defect counts by severity reach acceptable levels

Which of these is not a typical way to test for the Reliability test factor?

a) Regression Testing

b) Compliance Testing

c) Functional Testing

d) Manual testing

When documenting the actual result of a test in a bug log, it is necessary to record the specific inputs and outputs used.

a) true

b) false

Which of these is NOT a significant concern surrounding the test phase?

a) Significant problems not uncovered during testing

b) too many meetings

c) inadequate time/resources - especially if project is late into testing

d) Software not ready for testing (too many showstopper bugs preventing test execution)

Trying to delete or overwrite controlled files, to make sure the files cannot be deleted or overwritten, are examples of what kind of test?

a) Functional tests

b) Error guessing

c) File integrity tests

d) Maintainability tests

When reporting a defect, it is especially important to document the effects of the bug, in terms of severity, because...

a) This will guide how much attention the defect gets, and when it will be fixed

b) only severe bugs will get fixed

c) it keeps the programmers in their place

d) none of the above

Defect Tracking & Correction

The main steps in defect resolution are Prioritization, Scheduling, Fixing, Reporting

a) True

b) False

The scheduling of fixes should be based primarily on...

a) Defect counts

b) the overall project schedule

c) defect priority

Fixing a defect involves

a) The developers making and unit testing the fix

b) QA retesting the fix and validating it

c) regression testing surrounding the fix

d) QA reviewing scripts, test data, etc to see if updates need to be made

e) all of the above

Which of the following are ways to use the defect log to drive process improvement?

a) Go back to the process which originated the defect and find out what caused it; go back to the validation steps in that phase and find out why it wasn't caught earlier

b) think about what other kinds of things might have been missed, given that this bug "snuck through"

c) look at the fix history of the bug to find out if it could have been handled more

d) all of the above

The minimum requirements for a defect management process include all but which of these?

a) Defect to risk mapping

b) defect prevention measures

c) defect discovery, naming, and resolution

d) process improvement

e) deliverable baselining

Reducing the impact of a risk, should it occur, is an effective way to minimize risk.

a) True

b) False

Who should be involved in prioritizing fixes for valid defects found during system testing?

a) QA

b) the users

c) the project team

d) Developers

Which of the following techniques is not a common way to minimize expected risks?

a) Training and Education (both the workforce and the customers)

b) ranking risks using Pareto voting

c) Defensive Code

d) Defensive Design

e) Quality Assurance, Methodology and Standards

Which is cheaper, preventing defects or fixing them?

a) Preventing

b) fixing

Deliverable baselining is:

a) baselining at the end of each phase and keeping change control processes in place

b) not counting defects caught and fixed before your phase.

c) identifying key variables and defining standards for each

d) A and C only

e) A, B, and C

Which of these is not a "best practice" for defect managment?

a) Automate the capture and analysis of information wherever possible

b) strive to correct every reported defect before release

c) make defect management a risk-driven process

d) prevent defects or at least contain them within the phase they were discovered

e) use defect measurement to improve development

Which of these is not a valid purpose for reporting defects?

a) to correct the defect

b) to report the status of the application

c) to gather statistics to help develop expectations on future projects

d) to improve the development process

e) to show test coverage

A defect is "discovered" when

a) Development acknowledges a reported defect

b) none of the above

c) a tester observes a failure

d) development begins to work a reported defect

e) a tester reports a defect

ALE stands for

a) Annual Loss Expectation - a formula for judging risk

b) Acceptance Level Error - an error that was not caught until user acceptance testing

c) Audit Legal Effort - the amount of time billed to the project by the legal and audit staff

d) Assertion Loop Estimation - a measure of code complexity

Which of these refers to the "cost" of the risk?

a) Priority

b) severity

When reporting a defect it is important to...

a) Make sure it is reproducible, if possible

b) get it to the developer's attention promptly

c) attempt to analyze the cause of the defect

d) B and C only

e) A, B, and C

To estimate a risk's expected impact, you should...

a) Estimate the expected frequency of the event and the amount of loss that would happen each time

b) try to put a dollar amount on it

c) confer with experts on the probability and severity of the risk

d) a and b only

e) all of the above

Along with a short title for the defect and what phase it was found in, what other data should be part of the defect's "name"

a) Priority, like Urgent, High, Medium, Low

b) severity, like Critical, High, Medium, Low

c) steps to reproduce the error

d) data used to reproduce the error

e) categorization, like Missing, Inaccurate, Incomplete, Inconsistent

Test Techniques

The testing method that uses statistical techniques to find out how faults in the program affect its failure rate is:

a) cyclomatic complexity

b) fault estimation

c) error guessing

d) statistical testing

Faults that cause an input to be associated with the wrong path domain are called

a) Process flow inconsistencies

b) domain faults

c) computation faults

d) path errors

If you know that a developer tends to have extra errors in date-processing code, and decide to test dates harder than usual as a result, you are doing:

a) Risk based testing

b) static testing

c) error based testing

d) black box testing

Which of these is NOT an example of stress testing?

a) Entering transactions to determine that sufficient disk space has been allocated to the application

b) Ensuring that the communication capacity is sufficient to handle the volume of work by attempting to overload the network with transactions

c) Inducing a failure on one of the systems such that the program terminates.

d) Testing system overflow conditions by entering more transactions than can be accommodated by tables, queues, internal storage facilities, and so on

Fault seeding is instrumentation designed to

a) Populate an audit trail

b) estimate the number of bugs left in the code by measuring how many "bait" errors have not been found

c) Measure how many branches have been tested

d) test error handling code
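
The arithmetic behind fault seeding assumes real faults are found at roughly the same rate as the seeded "bait" faults. A minimal Python sketch of that estimate, with illustrative names and numbers:

    # Mills-style fault-seeding estimate of faults remaining in the code.
    def estimate_remaining_faults(seeded_total: int, seeded_found: int, real_found: int) -> float:
        estimated_real_total = real_found * seeded_total / seeded_found
        return estimated_real_total - real_found

    # Example: 20 of 25 seeded faults were found along with 40 real faults,
    # suggesting about 50 real faults in total, i.e. roughly 10 remaining.
    print(estimate_remaining_faults(25, 20, 40))  # 10.0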

_______ attempts to decide what constitutes a sufficient set of paths to test.

a) boolean analysis

b) cyclomatic complexity

c) fault based testing

d) perturbation testing
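
For context, McCabe's cyclomatic complexity V(G) = E - N + 2P (edges, nodes, and connected components of the control-flow graph) gives the number of linearly independent paths and so bounds a sufficient path set. A minimal Python sketch with illustrative numbers:

    # Cyclomatic complexity of a control-flow graph: V(G) = E - N + 2P.
    def cyclomatic_complexity(edges: int, nodes: int, components: int = 1) -> int:
        return edges - nodes + 2 * components

    # Example: 9 edges, 8 nodes, 1 component gives V(G) = 3, so at least
    # 3 linearly independent paths should be exercised.
    print(cyclomatic_complexity(9, 8))  # 3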

Testing to determine whether the system can meet the specific performance criteria is referred to as:

a) Compliance testing

b) Stress testing

c) Execution testing

d) Criteria based testing

Which type of testing gives you the best coverage?

a) Expression testing

b) branch testing

c) statement testing

d) condition testing
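
To see why these coverage types differ in strength, consider a minimal Python sketch (the function is hypothetical): a single test can reach 100% statement coverage while leaving a branch outcome untested.

    def clamp_low(x: int) -> int:
        if x < 0:
            x = 0
        return x

    # clamp_low(-5) executes every statement, giving full statement coverage,
    # but the "condition false" branch is never taken; branch coverage also
    # requires an input such as clamp_low(3).
    assert clamp_low(-5) == 0
    assert clamp_low(3) == 3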

Tests to determine the ability of the application to properly process incorrect transactions are:

a) Error handling testing

b) failure testing

c) compliance testing

d) security testing

Crashing a server to ensure that backup data and processes are adequate is an example of:

a) Operations testing

b) Intersystem testing

c) Error Handling testing

d) Recovery testing

Syntax testing evaluates the program's ability to handle:

a) Data outside the normal range

b) process flows

c) incorrectly formatted data

d) report generation
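
A minimal Python sketch of syntax testing (the date format and parser are illustrative): deliberately malformed input should be rejected cleanly rather than crash the program.

    from datetime import datetime

    def parse_date(text: str):
        try:
            return datetime.strptime(text, "%Y-%m-%d")
        except ValueError:
            return None  # malformed input is rejected, not crashed on

    # Incorrectly formatted or impossible dates must come back as rejections.
    for bad in ["2024-13-01", "01/02/2024", "", "2024-02-30"]:
        assert parse_date(bad) is None
    assert parse_date("2024-02-29") is not None  # valid leap-year date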

Which of these is an objective of Operations testing?

a) Executing each function in the Requirements document

b) Making sure that the system can interact with other related systems

c) determining that user documentation has been prepared and documented

d) crashing a server to test recovery procedures

Cost-benefit analysis is particularly important during _________ testing; otherwise large amounts of effort can be expended with minimal payback.

a) Requirements

b) unit

c) regression

d) functional

Reviewing and inspecting code to make sure programming standards are followed is known as:

a) Unit Testing

b) Desk Check

c) Compliance testing

d) Walkthrough

Which of these are all types of structural testing?

a) Fault estimation, domain testing, regression testing, condition testing

b) regression testing, condition testing, manual support testing

c) statement testing, branch testing, conditional testing, expression testing, path testing

d) expression testing, path testing, control testing, security testing

Which of these is a main objective of Security Testing?

a) Determining that application processing complies with the organization's policies and procedures

b) Determining that system test data and test conditions remain current.

c) Conducting redundant processing to ensure that the new version of the application performs correctly

d) determining that a realistic definition and enforcement of access to the system has been implemented

When testing a grade calculation system, a tester determines that all scores from 90 to 100 will yield a grade of A, but scores below 90 will not. This analysis is known as:

a) Equivalence partitioning

b) special value testing

c) axiomatic analysis

d) none of the above
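
A minimal Python sketch of the partitioning in this question (the grading function is hypothetical): one representative per equivalence class, plus the values around the 90 boundary.

    def grade(score: int) -> str:
        return "A" if 90 <= score <= 100 else "not A"

    # One value inside each class, plus boundary values at the class edge.
    for score, expected in [(95, "A"), (50, "not A"), (89, "not A"), (90, "A"), (100, "A")]:
        assert grade(score) == expected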

Which of these are examples of Requirements testing?

a) Creating a test matrix to prove that the documented requirements match what the user asked for

b) using a checklist prepared specifically for the application to verify the app's compliance to organizational policies and governmental regulations

c) Determining that the system meets the audit requirements

d) All of the above

A concise method of representing equivalence partitioning is:

a) a state machine

b) a complete data set

c) not possible

d) a decision table

Local extent, finite breadth and global extent, infinite breadth are two types of:

a) Static analysis

b) fault based testing

c) test results

d) statistical tests

Complexity measures, data flow analysis, and symbolic execution are all static testing methods known as:

a) Structural testing

b) structural analysis

c) branch testing

d) desk checks

Input Domain testing uses

a) Test data that covers the extremes of each input domain

b) test data that covers the midrange of each input domain

c) boundary analysis and equivalence partitioning

d) all of the above

A technique that produces a finite class of faults, but where a fault can crash the whole program, is known as:

a) local extent, finite breadth

b) global extent, infinite breadth

c) local extent, infinite breadth

d) global extent, finite breadth

Selecting test data on the basis of features of the function to be computed is called

a) Axiomatic analysis

b) error guessing

c) branch testing

d) special value testing

Coming up with tasks for the end user to do, then watching them carry the tasks out to make sure their procedures are correct, is:

a) Regression testing

b) intersystem testing

c) functional testing

d) manual support testing

When is it appropriate to use intersystem testing?

a) Coding phase

b) testing phase

c) implementation phase

d) all of the above

Output domain coverage is

a) a type of equivalence partitioning that ensures that all possible classes of outputs have been generated by the tests

b) ensuring that the output is readable

c) ensuring that the output is stored in the appropriate domain

d) none of the above

Which of the following can be used as test oracles?

a) An executable spec

b) an older version of the program

c) independently generated spreadsheets and prediction models

d) all of the above
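
A minimal Python sketch of an oracle in use (both functions are hypothetical): a trusted older version generates the expected results for the version under test.

    def old_tax(amount: float) -> float:   # trusted previous release (the oracle)
        return round(amount * 0.07, 2)

    def new_tax(amount: float) -> float:   # version under test
        return round(amount * 7 / 100, 2)

    # Compare the new output against the oracle across representative inputs.
    for amount in [0.0, 1.0, 19.99, 1000.0]:
        assert new_tax(amount) == old_tax(amount)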

Control testing is a broader term that includes

a) Error handling

b) functional analysis

c) axiomatic testing

d) all of the above

Functional analysis is a ______ testing technique, whereas functional testing is a _________ testing technique.

a) Static, dynamic

b) unit, integration

c) dynamic, static

d) integration, unit

Test stubs and harnesses are most often used during:

a) Unit testing

b) integration testing

c) system testing

d) all of the above
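
A minimal Python sketch of a stub inside a unit-test harness (all names are hypothetical): the unit under test depends on a payment service, which the harness replaces with a canned stand-in so the unit runs in isolation.

    class PaymentServiceStub:
        def charge(self, amount: float) -> bool:
            return True  # canned response instead of a real network call

    def checkout(cart_total: float, payment_service) -> str:
        return "paid" if payment_service.charge(cart_total) else "declined"

    # The harness drives the unit with the stub standing in for the real service.
    assert checkout(19.99, PaymentServiceStub()) == "paid"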

Comparing the old version of a program against a new one under test, in order to determine that the new system produces the correct results, is:

a) Parallel testing

b) regression testing

c) intersystem testing

d) state machines technique

All of the following are specification techniques EXCEPT

a) Algebraic

b) traceability matrix

c) axiomatic

d) decision tables

All of the following might be done during unit testing EXCEPT:

a) Desk check

b) manual support testing

c) walkthrough

d) compiler based testing

