feat: random walks and embeddings #752
Conversation
Work is still in progress, actually. I'm still thinking about how to implement RW in the best way: what should be in the abstraction, what should be in the implementations, what configurations should be provided, etc. Current idea:
I think it should allow generating really long walks with quite limited resources... Not sure about performance.
@SemyonSinchenko How could this work? Like a Pregel thing? Maybe nodes carry their paths with them? I've long been confused here... joins don't seem to scale. I'm not sure, but would love to understand... can you explain? I mean, I have read other implementations, like the one I shared in the other thread: https://github.com/data61/stellar-random-walk
Change my mind, but a pure second-order RW is not scalable. Just to understand, imagine two nodes with degree 1000 (a common case in power-law graphs). You need to collect two sets of neighborhoods of size 1000. It scales even worse than GraphFrames' triangleCount, which suffers from the same problem.
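To make the cost concrete, here is a toy back-of-the-envelope sketch in plain Scala (not GraphFrames code; the helper name is made up). A node2vec-style second-order step from `prev` to `curr` has to know, for every candidate neighbor of `curr`, whether it is also a neighbor of `prev`, so both neighbor sets must be materialized at every step:

```scala
// Toy cost model for a single second-order walk step: both the previous and
// the current vertex's neighbor lists must be collected before the next hop
// can be sampled. `secondOrderStepCost` is a hypothetical helper.
def secondOrderStepCost(degPrev: Int, degCurr: Int): Long =
  degPrev.toLong + degCurr.toLong

// Two degree-1000 hubs: 2000 ids collected per walker per step.
val perStep = secondOrderStepCost(1000, 1000)

// With 1,000,000 concurrent walkers, that is 2 billion ids shuffled per step.
val perBatchOfWalkers = perStep * 1000000L
```

The per-step cost is linear in the degrees, but the walk runs many steps over many concurrent walkers, so hubs dominate the total shuffle volume.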
Yes, it is doing joins. Joins are slow, but they are scalable. I see no other options, tbh. To avoid huge neighborhoods, I'm using a limit: at each batch, take only part of a vertex's neighbors.
I'm afraid I'm not knowledgeable enough to give a strong opinion here, but after a short chat with ChatGPT I wondered: what about integrating an existing system like ThunderRW with GraphFrames?
I'm starting to think that we do not need RWs at all. @SauronShepherd, there is no problem writing it from scratch in Spark. My question to you was mostly about the reservoir sampling aggregation function, if you are interested in implementing it.
Various embedding algorithms require random walks, and I've seen implementations out there, but maybe they're not a top priority?
This PR is a WIP implementation, so if you have any suggestions or comments, feel free to share them. I'm trying to make it scalable, and at first look it is.
- Added Scala-style docstrings to all classes, traits, methods, and fields
- Improved documentation for random walk algorithms and configurations
- Correct element_at index from 0 to 1 for 1-based Spark SQL arrays
- Fix walk array construction by appending nextNode instead of currVisitingVertex
- Add null handling for nodes with no outgoing neighbors in restart logic
- Add comprehensive Scala docstrings to RandomWalkBase and RandomWalkWithRestart
- Create RWExample.scala demonstrating RandomWalkWithRestart on LDBC datasets
...
This commit introduces a new Word2Vec-based embedding method using the hashing trick to handle large vocabularies efficiently in GraphFrames, particularly for random walk sequences. It includes configurable parameters like the number of hash functions, max features, and standard W2V settings, with comprehensive Scaladoc for public APIs.
- Added core/src/main/scala/org/graphframes/embeddings/Word2VecHashingTrick.scala: new class implementing the hashing trick by applying multiple Murmur3 hash functions and a modulo to map features to a fixed-size space, reducing collisions and memory usage. It trains a W2V model on the expanded sequences and provides a companion model class for vector retrieval via averaging hashed embeddings. Setters include docstrings explaining trade-offs (e.g., more hashes improve quality but multiply the dataset size).
- Modified core/src/main/scala/org/graphframes/examples/RWExample.scala: updated the main method to accept a single file path argument for edge loading instead of downloading LDBC datasets, simplifying usage for local files. Replaced vertex loading with direct derivation from edges for consistency and reduced I/O.
- Modified core/src/main/scala/org/graphframes/exceptions.scala: added a GraphFramesW2VException class to handle W2V-specific errors, such as unsupported input types in hashing.
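As a plain-Scala sketch of the hashing trick this commit describes (the names `hashToken` and `expandWalk` are hypothetical, not the actual Word2VecHashingTrick API): each token is hashed with several Murmur3 seeds and reduced modulo a fixed feature count, so the vocabulary size is bounded regardless of how many distinct vertex ids appear in the walks.

```scala
import scala.util.hashing.MurmurHash3

// Map one token to `numHashes` bucket ids: Murmur3 with a different seed per
// hash, then modulo `maxFeatures` to land in a fixed-size feature space.
// More hashes reduce the impact of collisions but multiply sequence length.
def hashToken(token: String, numHashes: Int, maxFeatures: Int): Seq[Int] =
  (0 until numHashes).map { seed =>
    Math.floorMod(MurmurHash3.stringHash(token, seed), maxFeatures)
  }

// A walk (sequence of vertex ids) becomes an expanded sequence of bucket ids;
// a downstream Word2Vec model would then be trained on these sequences.
def expandWalk(walk: Seq[String], numHashes: Int, maxFeatures: Int): Seq[Int] =
  walk.flatMap(hashToken(_, numHashes, maxFeatures))
```

Because the hashing is deterministic, lookup at inference time just re-hashes the token and averages the embeddings of its buckets, which is the trade-off the commit message alludes to.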
Replace collect_set + shuffle + slice with a ReservoirSamplingAgg UDAF for efficient sampling of up to maxNbrs neighbors per vertex. This improves performance by avoiding full neighbor list aggregation and shuffling, which is especially beneficial for high-degree vertices.
- Add ReservoirSamplingAgg trait: a generic aggregator using the reservoir sampling algorithm, supporting merge operations for distributed computation.
- Handle various vertex ID types (String, Short, Byte, Int, Long) with appropriate encoders.
- Raise GraphFramesUnsupportedVertexTypeException for unsupported types.
- Add a comprehensive test suite covering reduce, merge, and finish operations with edge cases and fixed seeds for determinism.
Modified files:
- .gitignore: ignore Emacs temp files for cleaner diffs.
- core/src/main/scala/org/graphframes/exceptions.scala: new exception class.
- core/src/main/scala/org/graphframes/rw/RandomWalkBase.scala: integrate ReservoirSamplingAgg in the prepareGraph method.
New files:
- core/src/main/scala/org/apache/spark/sql/graphframes/expressions/ReservoirSamplingAgg.scala
- core/src/test/scala/org/apache/spark/sql/graphframes/expressions/ReservoirSamplingAggSuite.scala
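For readers unfamiliar with the technique, here is a minimal plain-Scala sketch of a mergeable reservoir sample (an assumed shape only, not the actual ReservoirSamplingAgg code): each partial aggregate keeps at most `k` elements plus a count of how many it has seen, and a new element replaces a random slot with probability `k / n`.

```scala
import scala.util.Random

// Minimal mergeable reservoir sample (hypothetical sketch, not the real
// ReservoirSamplingAgg): keeps at most `k` elements from a stream.
final case class Reservoir[A](k: Int, seen: Int, sample: Vector[A]) {
  def add(a: A, rng: Random): Reservoir[A] = {
    val n = seen + 1
    if (sample.size < k) Reservoir(k, n, sample :+ a)
    else {
      val j = rng.nextInt(n) // uniform in [0, n)
      if (j < k) Reservoir(k, n, sample.updated(j, a)) // keep with prob k/n
      else Reservoir(k, n, sample)
    }
  }

  // Simplified merge: replay the other side's surviving sample. An exact
  // distributed merge would also weight by `that.seen`; this only shows the
  // reduce/merge shape a Spark Aggregator needs.
  def merge(that: Reservoir[A], rng: Random): Reservoir[A] =
    that.sample.foldLeft(this)((acc, a) => acc.add(a, rng))
}

// Demo: sample 5 out of 1000 elements with a fixed seed.
val rng = new Random(42)
val res = (1 to 1000).foldLeft(Reservoir[Int](5, 0, Vector.empty))(
  (r, i) => r.add(i, rng))
```

The point of the UDAF version is exactly this shape: a bounded-size buffer per vertex that can be reduced within a partition and merged across partitions, instead of collecting the full neighbor list.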
delete wrong implementation of w2v + hashing
- Replace reservoir sampling by KMinSampling
- Add L2 norm to Hash2vec
- Add an optional convolution step to RW embeddings
- Small updates and performance fixes
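For context on the reservoir-to-k-min switch, a plain-Scala sketch of k-min sampling under assumed semantics (function names are made up, not GraphFrames code): every id gets a deterministic pseudo-random rank from a fixed hash, and a vertex keeps only the k neighbors with the smallest ranks. Unlike reservoir sampling this needs no RNG state, and merging partial samples is just union-then-take-k-smallest, which is associative and commutative, exactly what a distributed aggregation wants.

```scala
import scala.util.hashing.MurmurHash3

// Deterministic pseudo-random rank for a vertex id (fixed seed).
def rank(id: Long): Int = MurmurHash3.stringHash(id.toString, 42)

// Keep the k neighbors with the smallest ranks.
def kMinSample(neighbors: Seq[Long], k: Int): Seq[Long] =
  neighbors.distinct.sortBy(rank).take(k)

// Merging partial samples from two partitions is just another k-min,
// so the aggregation composes cleanly across partitions.
def mergeSamples(a: Seq[Long], b: Seq[Long], k: Int): Seq[Long] =
  kMinSample(a ++ b, k)

// Tiny demo inputs (overlapping neighbor lists from two "partitions"):
val nbrsA = Seq(1L, 2L, 3L, 4L, 5L)
val nbrsB = Seq(4L, 5L, 6L, 7L)
```

The determinism is also what defuses the super-node problem: a degree-1M hub contributes the same k survivors no matter how its edges are partitioned.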
Mostly the latest changes are related to the performance of the …
@james-willis Hi! Thanks for the review. I addressed all of your comments, could you take another look?
Looking at the KMinSampling implementation, I have a suggestion for a future optimization.
Performance Enhancement: Bloom Filters for High-Degree Vertices
The current KMinSampling approach works well for most cases, but could benefit from bloom filters when dealing with very high-degree vertices (e.g., >10k neighbors). For vertices with extremely large neighbor sets, a bloom filter could:
This could be implemented as an optional optimization in a follow-up PR, perhaps triggered automatically when the vertex degree exceeds a threshold or when degree distribution analysis indicates the presence of super-nodes. The current implementation is solid, and this would be a nice-to-have enhancement for very large graphs.
Comment by Claude (AI Assistant)
I don't understand what this means, tbh. The KMinSampling is here exactly to avoid the super-nodes problem:
Where, how, and why should I put bloom filters here? We need samples, not a probabilistic structure to check whether a set contains an element... Also, to make a bloom filter you would need another aggregation similar to this KMin. Could you clarify, please?
james-willis
left a comment
LGTM.
Please ignore the bloom filter question. I agree it doesn't make sense with the sieve filter (honestly, I can't figure out what a bloom filter could be used for here).
What changes were proposed in this pull request?
Why are the changes needed?
Close #726
Close #324