igraph-help

[igraph] 'decompose.graph' versus 'clusters'


From: David Hunkins
Subject: [igraph] 'decompose.graph' versus 'clusters'
Date: Fri, 18 Jul 2008 13:31:35 -0700

Hi, I'm working on a large disconnected graph (5M vertices, 10M edges, 500k clusters). I'm trying to reduce the time it takes to compute betweenness for each vertex by breaking the graph up into its connected components. decompose.graph does this in a very convenient way, since it returns a list of graph objects that I can run betweenness on directly:

comps <- decompose.graph(g10k)
for (i in seq_along(comps)) {
  write(rbind(V(comps[[i]])$id, betweenness(comps[[i]])),
        file = "outfile", nc = 2, sep = ",", append = TRUE)
}

However, decompose.graph is very slow compared with clusters, which appears to do essentially the same work in almost no time. (I can compute no.clusters on my graph in a few seconds, whereas decompose.graph, run on the same graph, does not finish in 24 hours.) The docs for the underlying C functions indicate that 'clusters' and 'decompose.graph' both have O(|V| + |E|) time complexity, but that is not what I observe.
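To make the comparison concrete, here is a toy two-component graph (a stand-in for my 5M-vertex one) where both functions agree on the component structure; on the toy graph both are instant, but on the real graph only the first returns:

```r
library(igraph)

# Toy stand-in: two disconnected edges, i.e. two components.
g <- graph.formula(A - B, C - D)

no.clusters(g)              # 2; seconds on my real graph
length(decompose.graph(g))  # also 2; does not finish on my real graph
```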

It appears that others have selected 'clusters' to partition large graphs:

http://lists.gnu.org/archive/html/igraph-help/2007-12/msg00046.html

Does anybody have some R 'glue code' that makes clusters return a list of graphs like decompose.graph does? (I'm an R newbie.) Or other suggestions / clarifications?
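Roughly, the kind of glue I have in mind is the following (untested sketch; I'm assuming induced.subgraph and 1-based membership values as in current igraph — older versions use subgraph() and number components from 0, which is why I iterate over unique membership values rather than 1:no):

```r
library(igraph)

# Hypothetical helper: split a graph into one subgraph per connected
# component, using clusters() instead of decompose.graph().
component.graphs <- function(g) {
  cl <- clusters(g)
  lapply(unique(cl$membership), function(m) {
    induced.subgraph(g, which(cl$membership == m))
  })
}
```

If that works, comps <- component.graphs(g10k) should drop straight into the betweenness loop above.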

Thanks,

Dave

David Hunkins
im: davehunkins



