The End of Supply Chain Theory?
It is common research practice in our field to build a statistical model with a limited set of variables, applying the lens of a theory that is often borrowed from outside our field to a supply chain phenomenon, and to test this model on a sample of perhaps 200 observations. Other researchers collect data from three or four case companies to build or extend a research model comprising a small set of propositions. So far so good. “So far so outdated,” I might say if I were feeling malicious. Why? Researchers in fields like supply chain management might soon (or already?) be competing with “companies like Google, which have grown up in an era of massively abundant data, [that] don’t have to settle for wrong models”, as the editor-in-chief of Wired put it back in 2008 when proclaiming The End of Theory. So, is the data deluge about to make our research obsolete? If so, how should our community adapt to this new reality?
An interesting question. Some studies are already adopting big data as a research method, looking for new insights in the data deluge. However, I would argue that big data as a research method also has its limitations, for example when studying research questions in a specific context. Moreover, big data methods do not allow for the conceptualization and development of theories. So the data deluge will not replace qualitative studies.