As systems grow in complexity, achieving adequate coverage and accuracy through manual testing becomes too labour intensive. Automation helps the team build maintainable, flexible test suites, identifying issues quickly so they can be resolved at an early stage. Automated tests can be run repeatedly and provide additional coverage that would be difficult to achieve through manual testing alone.
Software testing has changed drastically over the years. New methods and tools have been introduced that have helped teams deliver high-quality software products. However, there are terms that sometimes confuse developers, project managers, and even testers. One such point of confusion is the distinction between “testing” and “checking”. In this post, I will describe these two concepts, their differences, and how they fit into the software development cycle.
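To give the distinction a concrete anchor before we dig in: in the framing popularised by Michael Bolton and James Bach, a “check” is an algorithmic confirmation of an expectation decided in advance, which is exactly what automation is good at, while testing involves human exploration and judgement. A minimal sketch of a check in Python (the `add` function and its expected values are hypothetical, purely for illustration):

```python
# A "check": the machine compares actual output against expectations
# that a human fixed in advance. It can run unattended, repeatedly.
def add(a, b):
    return a + b

def check_add():
    # The expected results are pre-decided; the code only confirms them.
    assert add(2, 3) == 5
    assert add(-1, 1) == 0

check_add()
print("all checks passed")
```

What the check cannot do is notice anything it was not told to look for; spotting a surprising or suspicious behaviour is the human, exploratory part that the post calls testing.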
It’s a natural part of software development for testers to live in a steady stream of bugs. Bugs can often cause developers and project managers to (figuratively) pull their hair out in frustration. For testers, bugs can be exciting, interesting, fulfilling, and frustrating too. When a project comes across our desk for testing, it can feel like the beginning of an Easter egg hunt. Bugs can be hidden in the most unexpected places, and discovering one can give a tester a ‘Eureka!’ moment. They can also range from very minor problems that most people would never notice, to severe errors that adversely affect the business and/or technical requirements of the project. These can bring a feeling of dread and discouragement to a team, especially when schedules are tight. At times it helps to step back and recognise that even the biggest, best-established, and best-resourced companies in technology, such as Microsoft, NASA, IBM, and Intel, have had monumentally bad bugs in their products. So let’s look at a couple of better-known software ‘Easter eggs’ from the past and present.