To continue our discussion on the different forms of the software supply chain, this month we look at best practices for working with third-party software suppliers. (See: How to Work Better With the Open-Source Community.)
The largely successful philosophy of why-build-when-you-can-buy has inspired Original Equipment Manufacturers (OEMs) that build software and systems to buy software components from third-party providers. Every software module within the system, regardless of its source, is an integral part of the OEM brand. Hence, every piece must measure up and be tested.
When it comes to testing, the software supply chain has some idiosyncrasies of its own. Each instance of a physical part is different and must be inspected for flaws. Software rarely picks up flaws in copying, but defects in the code can cause integration difficulties and stability problems during testing or, worse, after deployment to customers.
By using automated code testing, you can clearly demonstrate that you care about the quality of the software you provide. Here are a few processes that our customers have implemented:
- Put it in the contract: Modern static analysis solutions give vendors a cost-effective, automated, and repeatable way to ensure the quality of the software they create and ship. Because static analysis produces results that are measurable, objective, and repeatable, OEMs can require it as a contractual obligation of a third-party software provider.
- Expect a report indicating the quality of every software version received: A high-level report on the testing effort and resulting quality should accompany every drop of software received. A report stating that all bugs and defects have been fixed may be an unrealistic expectation. However, a report that shows untested parts or many defects that have not been reviewed is a strong signal that quality is not up to par. A report can also indicate how the quality compares with industry averages.
- Auditing mode: OEMs that purchase source code can reserve the right to analyze the supplier's code and report the results back. This can be implemented as part of the integration process, and it helps in two ways: the OEM can measure the quality of what it receives with the same measuring stick it uses internally, and by feeding the analysis results and recommendations back to the supplier, it gives the supplier an opportunity to fix the defects. (A sketch of how such a check could be automated follows below.)
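As a rough illustration of what "auditing mode" could look like when automated, here is a minimal sketch in Python. It assumes the supplier's analysis findings can be exported as a list of records with severity and review-status fields; the field names, thresholds, and sample data are placeholders for whatever the contract and the chosen tool actually define.

```python
from collections import Counter

# Hypothetical acceptance thresholds that a contract could specify
# (illustrative values, not industry standards).
MAX_UNREVIEWED = 0
MAX_HIGH_SEVERITY = 0


def audit_findings(findings):
    """Gate a software drop on a (hypothetical) list of static analysis findings.

    Each finding is assumed to be a dict with 'severity' and 'reviewed' keys;
    real tools use their own export formats, so this is only a sketch.
    """
    by_severity = Counter(item["severity"] for item in findings)
    unreviewed = sum(1 for item in findings if not item.get("reviewed", False))

    print(f"total findings: {len(findings)}")
    print(f"by severity:    {dict(by_severity)}")
    print(f"unreviewed:     {unreviewed}")

    return unreviewed <= MAX_UNREVIEWED and by_severity["high"] <= MAX_HIGH_SEVERITY


if __name__ == "__main__":
    # In practice the findings would be loaded from the supplier's exported
    # report; here we use inline sample data for illustration.
    sample = [
        {"severity": "high", "reviewed": False},
        {"severity": "low", "reviewed": True},
    ]
    print("drop accepted" if audit_findings(sample) else "drop rejected: quality gate not met")
```

The same kind of gate could run as part of the integration build, so every drop is measured with the same yardstick the OEM uses internally.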
As with every aspect of the value chain, successful processes create value for both parties involved. For a company purchasing software components, these methods improve and support the brand by ensuring that externally sourced code is held to a high standard. For a supplier, it's an objective way to represent the quality of the product and strengthen the relationship with the customer.
These are just some of the lessons we've learned from working with our customers who are on both sides of the software supply chain. What are your thoughts on these testing solutions?
Has there been any consideration of somehow certifying or qualifying the level of skill of the suppliers' test personnel? It seems to me that the content of a test report is only as good as the test staff behind it.
This is a valid point. The quality of the code is pretty much determined by the "effort" or "intelligence" the tester puts in. Automated test code can be misleading because the customer might be expecting different usage or scenarios. Hence there needs to be a set of agreed-upon test plans, test cases, and test scripts that capture all the requirements and ensure they are met.
Yes – and this assumes some acumen on the part of the customer to ensure the correct and necessary skills and/or contractual obligations are in place. Perhaps there is a consulting business as a liaison here?
Yeah, indeed, that's how most of the IT consulting companies like IBM, HP, Accenture, and Deloitte get to make $$$$. The trend is that they win the bid but then outsource the work to India.
In my opinion, when a company outsources software development to a third party, it obviously does not have enough expertise or resources to do the job on its own. In such a scenario it may not be able to validate the automated testing results either. The best approach could be for the outsourcing company to maintain its own test suite covering the functional requirements given to the outsourced party. That way it can check whether the software meets the required functionality in all possible variations. The outsourcing company will have the expertise to design such test cases, which are independent of the supplier's code development and testing.
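To make that concrete, here is a minimal sketch of a customer-owned functional test written with Python's built-in unittest module. The calculate_total function and the requirements it encodes are hypothetical placeholders; in practice the tests would exercise the interface defined in the functional requirements handed to the outsourced party.

```python
import unittest


def calculate_total(items, tax_rate):
    """Stand-in for a hypothetical function delivered by the supplier.

    In practice this would be imported from the supplier's deliverable; the
    name and signature here are placeholders, not a real interface.
    """
    return round(sum(items) * (1 + tax_rate), 2)


class FunctionalRequirementTests(unittest.TestCase):
    """Customer-owned tests derived from the requirements, independent of the
    supplier's own development and testing."""

    def test_total_includes_tax(self):
        # Illustrative requirement: a 100.00 order with 10% tax totals 110.00.
        self.assertAlmostEqual(calculate_total([100.00], tax_rate=0.10), 110.00)

    def test_empty_order_totals_zero(self):
        # Illustrative requirement: an empty order must total zero.
        self.assertEqual(calculate_total([], tax_rate=0.10), 0.0)


if __name__ == "__main__":
    unittest.main()
```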
My only question is what measures or parameters they are using to analyze these results. And does the company have enough expertise on its staff to come up with solutions if it finds flaws? Is there a standard of measurement being used?
Well, generally speaking, the number of tests to perform is closely related to LOC (lines of code) or FP (function points). Other parameters to take into consideration relate to how the application is used: for example, real-time or embedded-device software. Recent modeling approaches based on UML/WebML allow more functions to be measured for performance. In any case, striking the right trade-off between the effort to spend, the number of tests, the features to test, and the way to perform them is the key responsibility of the software testing outsourcer.
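As a back-of-the-envelope illustration of sizing a test effort from LOC or function points, here is a small sketch; the ratios are placeholder parameters that would be negotiated per contract, not published industry figures.

```python
def estimate_test_cases(loc=None, function_points=None,
                        tests_per_kloc=5, tests_per_fp=2):
    """Return rough test-case estimates from whichever size metrics are available.

    tests_per_kloc and tests_per_fp are hypothetical, contract-specific ratios,
    not published industry averages.
    """
    estimates = {}
    if loc is not None:
        estimates["from_loc"] = round(loc / 1000.0 * tests_per_kloc)
    if function_points is not None:
        estimates["from_function_points"] = round(function_points * tests_per_fp)
    return estimates


# Example: a 50,000-line component that was also sized at 120 function points.
print(estimate_test_cases(loc=50_000, function_points=120))
# -> {'from_loc': 250, 'from_function_points': 240}
```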
I'm really wondering who designs the automated testing code.
If the third-party vendor does it, then there is really no way of guaranteeing anything.
However, if the client does it, then it is only as good as their ability to understand exactly how the software should work, without which they cannot really test anything. And from my experience, designing a testing application can be just as taxing as designing the actual application itself, depending on the complexity.
Sometimes, I think outsourcing to third-party software developers is not due only to an inability to do the task; it's also a way of saving time, especially for large firms like HP and the like.
From my experience, I have always found it much more effective to test the overall software against performance metrics rather than test the code individually. Testing of code may be done at a preliminary level, but overall testing should be done by evaluating the system against performance measures.
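A minimal sketch of that kind of system-level check is below, assuming a hypothetical process_batch entry point and an illustrative latency budget; both are placeholders rather than real requirements.

```python
import time


def process_batch(records):
    """Hypothetical system-level entry point being evaluated; in practice this
    would exercise the delivered software end-to-end, not an individual module."""
    return [r.upper() for r in records]  # stand-in workload for illustration


def measure_latency(workload, runs=5):
    """Return the average wall-clock time (seconds) over several runs."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        process_batch(workload)
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)


if __name__ == "__main__":
    workload = ["record-%d" % i for i in range(10_000)]
    LATENCY_BUDGET_S = 0.5  # illustrative performance target, agreed per contract
    avg = measure_latency(workload)
    print(f"average latency: {avg:.4f}s -> {'PASS' if avg <= LATENCY_BUDGET_S else 'FAIL'}")
```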
It is absolutely true. Overall testing fits better with the strategy of providing results in line with user expectations; code testing is closer to the development phases. We must also add that there are several testing phases in the software development process, so the right choice depends on the contract signed with the outsourcer and the tasks you want to delegate.
Tioluwa, it is a very good point. Based on my experience, I can report that sometimes an "offshore" agreement is focused on basic testing, and sometimes it is focused on finding outsourcers with the expertise to conceive the test design and then perform the tests as well. Of course, as a matter of costs and benefits, it is easy to understand that we are speaking about contracts with completely different responsibilities and economics.
In some cases, such as automotive software, static code analysis is required.
This is a good point, t.alex. Another way to analyze the topic is to consider the costs of the software testing phases. In line with your opinion, even though tests have to be performed end-to-end, the earlier a possible bug is discovered (e.g., at the static code analysis stage), the lower the cost to resolve and patch it.