#rant that no one asked for.
I came across this whitepaper on testing microservices. I clicked the link because I am writing microservices now, but I am fairly new to them.
I made a mistake that I have repeated 1593 times in the past: the link in question was a resource from one of the big IT service providers.
While the work is indeed commendable - automating 4000 test cases is no joke - the content and the approach to highlighting the “best practice” leave much to be desired.
The paper highlights practices that have been assumed to be best practices for a long time, and provides absolutely no information beyond “we did so and so”. “Extreme automation” is the buzzword of the day, and as we corporate types agree - it doesn’t get better than that.
I mean - yes, I understand this is a mere case study. I don’t expect every case study or paper to contain all the real “stuff” all the time, but the following would have been welcome -
- How does the accelerator fare compared to commercial tools today?
- What are the savings in license costs from avoiding commercial tools? (Typically clients don’t pay, or think they don’t pay, for “accelerators”.)
- How well did the accelerator write tests? What was the coverage advantage vis-à-vis other tools?
- How did developers breathe easier using this tool compared to, say, writing tests in Postman (Pro) or other free and open source tools that are popular today? (A sketch of that route follows this list.)
- By how much did the framework improve efficiency in generating test data?
- How was a high degree of overlap achieved in the smart sharing of regression/system test cases, while keeping effort and test duration minimal?
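To make the comparison with Postman concrete, here is roughly what the “free and open source tools” route looks like. This is a minimal sketch using pytest and requests; the endpoint, payload, and field names are my own invention for illustration, not anything taken from the paper -

```python
# A minimal API test with pytest + requests. The /orders endpoint,
# payload, and response fields are hypothetical placeholders.
import requests

BASE_URL = "http://localhost:8080"  # hypothetical service under test


def test_create_order_returns_201_and_an_id():
    payload = {"sku": "ABC-123", "quantity": 2}
    resp = requests.post(f"{BASE_URL}/orders", json=payload, timeout=5)

    assert resp.status_code == 201
    body = resp.json()
    # Assert on the contract, not on incidental details.
    assert "id" in body
    assert body["quantity"] == 2
```

A whitepaper that showed even this much, side by side with its accelerator’s equivalent, would answer half the questions above.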
Take note that the paper still has not outlined just how they are doing the “thing”. That may be part of the IP, or reflect a “scarcity mindset”. But readers only get excited about the work when they know more about the advantages of the tools and practices employed in the project.
A more complete paper would probably also cover -
- Employing live vs. test data, and how the framework can help in specific cases - where an advantage was found in one or both.
- Why performance tests play a greater role today, how they can be integrated into the overall cycle, and how they are automated using existing toolsets.
- How services can be mocked or virtualised efficiently given a specific set of constraints. (A minimal mocking sketch follows this list.)
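On that last point, service mocking is easy enough to demonstrate. Here is a minimal sketch using the Python responses library (my choice of tool, not the paper’s); the inventory-service URL and payload are hypothetical:

```python
# Stubbing an HTTP dependency with the `responses` library
# (pip install responses). URL and payload are made up for the example.
import requests
import responses


@responses.activate
def test_order_total_with_inventory_service_stubbed():
    # The downstream call is intercepted; the test never touches the network.
    responses.add(
        responses.GET,
        "http://inventory/items/ABC-123",
        json={"sku": "ABC-123", "unit_price": 9.99},
        status=200,
    )

    # Code under test would normally call the real inventory service.
    item = requests.get("http://inventory/items/ABC-123").json()
    total = item["unit_price"] * 2

    assert total == 19.98
```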
It’s not all bad
Don’t get me wrong. I can understand where the above content is coming from. Marketing pressures, typical development processes that don’t cover best practices, and a lack of resources to think content strategy through - they are all real.
And... I have been there multiple times. I have been guilty of producing such content, case studies and “white papers” in the past (and will probably churn them out again) - content that lacks substance for so-called “expert readers”, but content nevertheless. I hope the servers hosting them burn to the ground so that I can lecture on the best practices of writing good content. Oh wait, I hope that does not include this website too.
Fortunately, good content is everywhere today. Want to automate your API testing and write something about it? I would say start with these posts -
- know what API testing is about - https://martinfowler.com/articles/microservice-testing/#agenda
- free and open source tools - Swagger integrations with open source tools such as SwaggerAssertions and swagger-test (a schema-validation sketch follows this list)
- how a popular tool aids writing your tests - look up Postman Pro
- how a seemingly successful product helps test automation - https://assertible.com/blog/testing-an-api-using-swagger
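For the Swagger-based tools in particular, the core idea is asserting a response against a schema. A minimal sketch using jsonschema - the schema fragment and endpoint are hypothetical, and in practice the schema would be extracted from your Swagger/OpenAPI definition rather than written by hand:

```python
# Validating an API response body against a JSON Schema fragment.
import requests
from jsonschema import validate

# Hypothetical schema; in real use, pull this from the OpenAPI spec.
user_schema = {
    "type": "object",
    "required": ["id", "email"],
    "properties": {
        "id": {"type": "integer"},
        "email": {"type": "string"},
    },
}


def test_get_user_matches_schema():
    resp = requests.get("http://localhost:8080/users/42", timeout=5)
    assert resp.status_code == 200
    # Raises jsonschema.ValidationError if the body drifts from the contract.
    validate(instance=resp.json(), schema=user_schema)
```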