Friday, June 1, 2018

The Future of Automated Testing

To gather insights on the current and future state of automated testing, we asked 31 executives from 27 companies, "What’s the future of automated testing from your point of view, and where do the greatest opportunities lie?" Here's what they told us:

AI/ML

  • 1) Distributed-systems traceability as part of the testing cycle: how does a change in one component affect the behavior of the other components? We see this in cloud-native architectures: the quality of the code versus the quality of the network, and the measurability of system behavior. 2) Systems that can leverage ML, since there are more things to keep up with than a human can see, comparing current behavior to previous behavior and to other known systems. 
  • Testing is designed for AI/ML to build predictable models from patterns. An AI bot creates log-in tests and then goes throughout the SDLC. Today it’s CI/CD and streamlining; long-term, this will be very commonplace. Testing will be as natural as writing code, and it will be done by machines. Developers will be working on architecture, user interface designs and interactions, and writing code that works. The focus will be on creating experiences versus outcomes. Software engineers today are focused on building a drill. In the future, they’ll be focused on hanging the painting. 
  • Quality will become part of development. The only way to move faster is to build quality into how we develop, and companies that have already figured this out are doing it. There is still a vacuum around how the process is managed. Tools and vendors are aging and exiting. We need analytical capability over everything that’s happening, enabling the ability to make quick, well-informed decisions. AI/ML are part of the solution as teams generate more data; companies are struggling with managing large amounts of data. Another trajectory is new types of technology being automated. IoT is a new space with rudimentary DevOps processes, and there is not enough standardization in the technology to fully automate everything. This will transform over the next few years.
  • The only areas we’ve moved are functional and unit testing. There is so much non-functional work left: performance testing, security testing, and accessibility testing, with more litigation happening around compliance with accessibility regulations. Solutions are coming to market to address these non-functional areas. We're excited about conversational interfaces (NLP), but that is further out; testing them will be manual initially but automated as they catch on. 
  • The greatest opportunity for automated security testing is machine learning. Security teams that are able to leverage their historical vulnerability data to train machine learning models will be able to automate the vulnerability verification process, thereby providing developers accurate vulnerability data in near real time (a minimal sketch of this idea follows this list). 
  • Expand automation beyond test execution with AI/ML: thousands of tests per day, automatically generated and dynamic. Bring users into testing with monitoring, GA, usability testing, and Optimizely. 
  • To make testing not feel like a stepbrother bolted on after development is complete, but something that’s embedded in every step of development and delivery so it’s part of the overall process. Seamless interfaces in the form of SDKs, stacking from development to unit testing, to end-to-end testing, to deployment. This saves time and money while improving quality. Visual AI is key because it’s a common language everyone understands and can relate to. 
  • Using AI to run every test imaginable in the least amount of time to ensure your code is always vulnerability free. 
  • In the area of machine learning: going from a scenario where a data analyst updates the model to a model that updates itself, without requiring coding skills. Very specific, targeted machine learning instances, with more focused ML functions to look at network scenarios. We’re two years away. Rather than network validation, we'll be able to see how a system adapts to new patterns. 
  • Smart reporting, including smart analysis, has already begun by incorporating machine learning into the automation test results. So, instead of just conveying what went well and what did not, the report can analyze and then describe the hows and whys, based on previous occurrences. Maybe the bugs are only in the QA environment, for example, and not in production. This will continue to progress in the near future. Also, with machine learning, automated testing can determine which build is the best one to release in your production environment. We already see some companies using these methods by having their builds compared automatically using AI/machine learning, where there’s a scoring system set up, and the “winner,” i.e. the best build according to the automated tests, is the one deployed (see the toy scoring example after this list). We believe that this will become more mainstream moving into the future. 
  • Models get smarter with more data. A hybrid approach, with humans understanding the business context. As ML learns the nuances of businesses, applications will be able to anticipate and solve problems. We need to point the tools in the right direction. 
  • 1) Software testing matters more today than in the history of development. Software failure = business failure. 2) The notion of AI has invigorated the industry to be more creative in approach and using the data exhaust that is produced to make testing easier and more automated. 
  • Testing requires some members of the team to have competency with testing in their toolchain. Tests will become easier to use, with tools that use human heuristics to automatically test the application like a person would. New design patterns for testing will emerge as ML evolves, giving more efficient and effective tests at a lower cost. Low code reduces the barrier to entry.
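
As a concrete, simplified illustration of the machine-learning approach to vulnerability verification mentioned in this list, the sketch below trains a classifier on historically triaged scanner findings so that new findings can be scored automatically. It assumes pandas and scikit-learn are available; the file name, column names, and feature choice are hypothetical stand-ins for whatever a real vulnerability backlog contains.

```python
# Minimal sketch: train a classifier on historically triaged scanner findings
# (labeled true/false positive) so new findings can be auto-verified.
# The CSV file and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Historical findings: free-text description plus the analyst's verdict.
findings = pd.read_csv("triaged_findings.csv")   # hypothetical export of past triage work
X = findings["description"]                      # scanner output text
y = findings["is_true_positive"]                 # 1 = confirmed, 0 = false alarm

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# TF-IDF over the finding text feeding a random-forest classifier.
model = make_pipeline(TfidfVectorizer(min_df=2), RandomForestClassifier(n_estimators=200))
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))

# In CI, new scanner findings would be scored the same way, and only
# high-confidence true positives surfaced to developers in near real time.
new_findings = ["SQL string concatenation in /api/users handler"]
print(model.predict_proba(new_findings))
```

The design choice here is to treat verification as a triage-classification problem: the model does not find vulnerabilities, it predicts which scanner findings are worth a developer's attention based on how similar findings were judged in the past.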
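
The build-scoring idea described in this list can be pictured with a toy example: score each candidate build from its automated test results and promote the best one. The fields and weights below are invented purely for illustration; a production system would derive them, or the scoring model itself, from historical release data rather than hard-coding them.

```python
# Toy illustration of "build scoring": each candidate build is scored from its
# automated test results and the best-scoring build is the one promoted.
# All fields and weights are invented for illustration.
from dataclasses import dataclass

@dataclass
class BuildResults:
    build_id: str
    tests_passed: int
    tests_failed: int
    flaky_reruns: int
    p95_latency_ms: float

def score(b: BuildResults) -> float:
    """Higher is better: reward passing tests, penalize flakes and latency."""
    total = b.tests_passed + b.tests_failed
    pass_rate = b.tests_passed / total if total else 0.0
    return 100 * pass_rate - 5 * b.flaky_reruns - 0.1 * b.p95_latency_ms

candidates = [
    BuildResults("build-142", 980, 4, 3, 210.0),
    BuildResults("build-143", 984, 0, 7, 245.0),
]
winner = max(candidates, key=score)
print(f"Deploy {winner.build_id} (score={score(winner):.1f})")
```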

Self-Healing

  • 1) Tools are consumable and integrate with the wider ecosystem. Avoid vendor lock-in. 2) Intelligent automation with creation and maintenance of tests. Self-healing tests. 3) Automation testing for all personas within the organization. 
  • 1) Continuous testing as a service. 2) With AI/ML becoming more predominant, testing will be a fast mover in leveraging insights from test results, leading to self-healing and self-remediation. 
  • Self-healing. Automation can be fragile and break as changes are made to the code or the environment, so organizations do not fully realize the ROI of automated testing that their leaders expected. We will be able to leverage AI/ML to self-heal automated tests that used to work, without an engineer having to come in and make changes: look at tests that have run successfully, learn the pattern, and compare it to when they fail (a minimal sketch follows).
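
A minimal sketch of what a self-healing test step might look like, assuming Selenium WebDriver in Python: when the primary locator breaks after a UI or code change, the helper falls back to alternate locators and logs which one "healed" the step. A real AI/ML implementation would rank candidate locators from patterns in past successful runs rather than use a hand-written fallback list; all locator values here are hypothetical.

```python
# Minimal self-healing locator sketch (assumes Selenium WebDriver).
import logging
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By

log = logging.getLogger("self_healing")

def find_with_healing(driver, locators):
    """Try each (by, value) locator in order; log when a fallback 'heals' the step."""
    primary, *fallbacks = locators
    try:
        return driver.find_element(*primary)
    except NoSuchElementException:
        for candidate in fallbacks:
            try:
                element = driver.find_element(*candidate)
                log.warning("Healed broken locator %s with %s", primary, candidate)
                return element
            except NoSuchElementException:
                continue
        raise

# Usage, with locators ordered from most to least preferred (values are hypothetical):
# submit = find_with_healing(driver, [
#     (By.ID, "checkout-submit"),
#     (By.CSS_SELECTOR, "form#checkout button[type=submit]"),
#     (By.XPATH, "//button[contains(., 'Place order')]"),
# ])
```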

Other

  • As companies go from monoliths to microservices, they are able to isolate parts of the system and test them in the absence of everything else (as sketched below).
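
A small sketch of what testing in isolation can look like once a dependency sits behind its own service boundary: the order logic below is exercised against a stubbed inventory client, with no other components running. The service and method names are illustrative; only Python's standard library is used.

```python
# Testing one service in isolation by stubbing its dependency (illustrative names).
import unittest
from unittest.mock import Mock

class OrderService:
    def __init__(self, inventory_client):
        self.inventory = inventory_client

    def place_order(self, sku, qty):
        if not self.inventory.is_in_stock(sku, qty):
            raise ValueError("out of stock")
        return {"sku": sku, "qty": qty, "status": "accepted"}

class OrderServiceTest(unittest.TestCase):
    def test_rejects_when_inventory_service_reports_no_stock(self):
        inventory = Mock()
        inventory.is_in_stock.return_value = False
        with self.assertRaises(ValueError):
            OrderService(inventory).place_order("SKU-1", 2)

    def test_accepts_when_stock_is_available(self):
        inventory = Mock()
        inventory.is_in_stock.return_value = True
        order = OrderService(inventory).place_order("SKU-1", 2)
        self.assertEqual(order["status"], "accepted")

if __name__ == "__main__":
    unittest.main()
```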



    from DZone.com Feed https://ift.tt/2LLx0sR
