
Possible solution to large data sets? #46

@riversdark

Description

This has already been mentioned in another issue, but I think it merits its own thread; besides, the other one hasn't been updated in quite a while.

Since the algorithm's complexity is $O(n^3)$, the computation becomes very costly very quickly, in both time and memory. Are there any solutions or approximation algorithms that address this problem?

In the other thread, a data set of 55K points was already considered big, and mine is thousands of times that size... Any hope?
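
For concreteness, here is the kind of workaround I have in mind: a minimal low-rank (Nyström) sketch in plain NumPy. Everything here is my own assumption for illustration only, not this library's API: the RBF kernel, the landmark count `m`, and the function names `rbf_kernel` / `nystrom_approximation` are all made up. It just shows how an $O(n^3)$ kernel factorisation is often replaced by an $O(nm^2 + m^3)$ approximation.

```python
import numpy as np

def rbf_kernel(X, Y, lengthscale=1.0):
    """Squared-exponential kernel between the rows of X and Y (illustrative)."""
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-0.5 * sq_dists / lengthscale**2)

def nystrom_approximation(X, m=300, lengthscale=1.0, jitter=1e-6):
    """Rank-m Nystrom factor L such that K ≈ L @ L.T.

    Uses m randomly chosen landmark points, so the cost is roughly
    O(n * m^2 + m^3) instead of the O(n^3) needed to factor the full
    n x n kernel matrix. (Hypothetical helper, not part of this library.)
    """
    n = X.shape[0]
    idx = np.random.choice(n, size=min(m, n), replace=False)
    Z = X[idx]                                # landmark points
    K_nm = rbf_kernel(X, Z, lengthscale)      # n x m cross-kernel
    K_mm = rbf_kernel(Z, Z, lengthscale)      # m x m landmark kernel
    # Symmetric inverse square root of K_mm, regularised with jitter.
    w, V = np.linalg.eigh(K_mm + jitter * np.eye(len(idx)))
    K_mm_inv_sqrt = V @ np.diag(1.0 / np.sqrt(np.maximum(w, jitter))) @ V.T
    return K_nm @ K_mm_inv_sqrt               # n x m factor L

# A data set of this size would be hopeless to handle with the exact O(n^3) route.
X = np.random.randn(50_000, 3)
L = nystrom_approximation(X, m=300)
print(L.shape)  # (50000, 300)
```

Is something along these lines (or inducing points, random features, sparse solvers, etc.) feasible here, or already planned?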
