A very interesting approach to selecting analytics projects.
Here’s what project selection looks like in a firm with an excellent data strategy: First, the company collects ideas. This effort should be spread as broadly as possible across the organization, at all levels. If you only see good and obvious ideas on your list, worry — that’s a sign that you are missing out on creative thinking. Once you have a large list, filter by the technical plausibility of an idea. Then, create the scatterplot described above, which evaluates each project on its relative cost/complexity and value to the business.
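The collect-then-filter step can be sketched in a few lines. This is a minimal illustration, not a prescribed tool: the `Idea` fields (`feasible`, `cost`, `value`) and all the sample ideas are assumptions introduced here to show the shape of the data going onto the scatterplot.

```python
from dataclasses import dataclass

@dataclass
class Idea:
    name: str
    feasible: bool   # passes the technical-plausibility filter
    cost: float      # relative cost/complexity (scatterplot x-axis)
    value: float     # value to the business (scatterplot y-axis)

def shortlist(ideas):
    """Keep only technically plausible ideas and return the
    (name, cost, value) points ready to be plotted."""
    return [(i.name, i.cost, i.value) for i in ideas if i.feasible]

# Illustrative ideas, spanning the obvious and the ambitious:
ideas = [
    Idea("churn model", feasible=True, cost=3.0, value=8.0),
    Idea("teleportation", feasible=False, cost=9.0, value=10.0),  # filtered out
    Idea("demand forecast", feasible=True, cost=5.0, value=9.0),
]
points = shortlist(ideas)
```

Each surviving point then lands on the cost/value scatterplot for the comparison that follows.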
Now it gets interesting. On your scatterplot, draw lines between potentially related projects. These connections exist where projects share data resources, where one project would enable data collection useful to another, or where foundational work on one project is also foundational work on another. This approach acknowledges the realities of working on such projects, like the fact that building a precursor project makes successor projects faster and easier (even if the precursor fails). The costs of gathering data and building shared components are amortized across projects.
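The amortization idea can be made concrete by treating projects as nodes and shared components as the connecting lines. The sketch below is a hypothetical model under assumed names and numbers: each shared component's cost is split evenly across the projects that use it, which is one simple way to amortize.

```python
def amortized_costs(standalone, shared):
    """standalone: {project: cost of its unique work}
    shared: {component: (cost, [projects that use it])}
    Each shared component's cost is split evenly among its users."""
    totals = dict(standalone)
    for cost, users in shared.values():
        for project in users:
            totals[project] += cost / len(users)
    return totals

# Illustrative numbers only:
standalone = {"A": 2.0, "B": 3.0, "C": 1.0}
shared = {
    "data pipeline": (6.0, ["A", "B", "C"]),  # 2.0 charged to each user
    "feature store": (4.0, ["B", "C"]),       # 2.0 charged to each user
}
costs = amortized_costs(standalone, shared)
# Project B now carries 3.0 + 2.0 + 2.0 = 7.0, far below a naive
# estimate in which B alone bears both shared components (3 + 6 + 4 = 13).
```

An even split is the simplest choice; weighting by usage or by expected value would be a natural refinement.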
This approach makes higher-value projects — those that would perhaps have seemed too ambitious — look less like an aggressive, expensive push forward. Instead, it reveals that such projects may indeed be more efficient and safer to proceed with than other lower-value projects that looked attractive in a naive analysis.
Put differently, an excellent data strategy acknowledges that projects play off one another, and that the costs of projects change over time in light of other projects undertaken (and of new technology as well). This allows more accurate planning and may expand the organization’s capabilities more than expected. You can revisit this planning process quarterly, which is in line with how quickly machine learning technologies are changing.