References
Graph problems
GenericTensorNetworks.solve — Function

solve(problem, property; usecuda=false, T=Float64)

Solve a certain property of a graph problem.
Positional Arguments
- problem is the graph problem with tensor network information,
- property is a string specifying the task. Using the maximum independent set problem as an example, it can be one of
  - PartitionFunction() for computing the partition function,
  - SizeMax(k=Single) for finding maximum-$k$ set sizes,
  - SizeMin(k=Single) for finding minimum-$k$ set sizes,
  - CountingMax(k=Single) for counting configurations with maximum-$k$ sizes,
  - CountingMin(k=Single) for counting configurations with minimum-$k$ sizes,
  - CountingAll() for counting all configurations,
  - GraphPolynomial(; method=:finitefield, kwargs...) for evaluating the graph polynomial,
  - SingleConfigMax(k=Single; bounded=false) for finding one maximum-$k$ configuration for each size,
  - SingleConfigMin(k=Single; bounded=false) for finding one minimum-$k$ configuration for each size,
  - ConfigsMax(k=Single; bounded=true, tree_storage=false) for enumerating configurations with maximum-$k$ sizes,
  - ConfigsMin(k=Single; bounded=true, tree_storage=false) for enumerating configurations with minimum-$k$ sizes,
  - ConfigsAll(; tree_storage=false) for enumerating all configurations.
Keyword arguments
- usecuda is a switch to use CUDA (if possible); the user needs to call the statement using CUDA before turning on this switch.
- T is the "base" element type, which can sometimes be used to reduce the memory cost.
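As a minimal usage sketch (reusing the Petersen-graph independent set example from later in this section):

using GenericTensorNetworks, Graphs

net = GenericTensorNetwork(IndependentSet(smallgraph(:petersen)))
solve(net, SizeMax())                 # maximum independent set size, as a Tropical number
solve(net, CountingMax(); T=Float32)  # count maximum-size sets, with a smaller base type T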
GenericTensorNetworks.GenericTensorNetwork — Type

struct GenericTensorNetwork{CFG, CT, LT}
GenericTensorNetwork(problem::GraphProblem; openvertices=(), fixedvertices=Dict(), optimizer=GreedyMethod())

The generic tensor network that is generated from a GraphProblem.
Positional arguments
- problem is the graph problem.
- code is the tensor network contraction code.
- fixedvertices is a dictionary specifying the fixed dimensions.
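A small sketch of how the keyword arguments may be combined (the choice of TreeSA as the optimizer is just one option documented below):

using GenericTensorNetworks, Graphs

problem = IndependentSet(smallgraph(:petersen))
# fix vertex 1 to state 0 (excluded from the set) and optimize the contraction order
net = GenericTensorNetwork(problem; fixedvertices=Dict(1=>0), optimizer=TreeSA())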
GenericTensorNetworks.GraphProblem — Type

GraphProblem
The abstract base type of graph problems.
GenericTensorNetworks.IndependentSet — Type

struct IndependentSet{WT} <: GraphProblem
The independent set problem in graph theory.
Positional arguments
- graph is the problem graph.
- weights are associated with the vertices of the graph, defaulting to UnitWeight().
Examples
julia> using GenericTensorNetworks, Graphs
julia> problem = IndependentSet(smallgraph(:petersen));
julia> net = GenericTensorNetwork(problem);
julia> solve(net, ConfigsMax())
0-dimensional Array{CountingTropical{Float64, ConfigEnumerator{10, 1, 1}}, 0}:
(4.0, {1010000011, 1001001100, 0100100110, 0101010001, 0010111000})ₜ
GenericTensorNetworks.MaximalIS — Type

struct MaximalIS{WT<:Union{UnitWeight, Vector}} <: GraphProblem

The maximal independent set problem. In the constructor, weights are the weights of vertices.

Positional arguments
- graph is the problem graph.
- weights are associated with the vertices of the graph.
GenericTensorNetworks.Matching — Type

struct Matching{WT<:Union{UnitWeight, Vector}} <: GraphProblem

The vertex matching problem.

Positional arguments
- graph is the problem graph.
- weights are associated with the edges of the graph.
GenericTensorNetworks.Coloring — Type

struct Coloring{K, WT<:Union{UnitWeight, Vector}} <: GraphProblem
Coloring{K}(graph; weights=UnitWeight())

The vertex coloring problem.

Positional arguments
- graph is the problem graph.
- weights are associated with the edges of the graph, defaulting to UnitWeight().
GenericTensorNetworks.DominatingSet — Type

struct DominatingSet{WT<:Union{UnitWeight, Vector}} <: GraphProblem
DominatingSet(graph; weights=UnitWeight())

The dominating set problem.

Positional arguments
- graph is the problem graph.
- weights are associated with the vertices of the graph, defaulting to UnitWeight().
GenericTensorNetworks.SpinGlass — Type

struct SpinGlass{WT<:Union{UnitWeight, Vector}} <: GraphProblem
SpinGlass(n, cliques; weights=UnitWeight())
SpinGlass(graph::SimpleGraph, J=UnitWeight(), h=ZeroWeight())

The spin-glass problem.

Positional arguments
- n is the number of spins.
- cliques is a vector of cliques, each being a vector of vertices (integers). For a simple graph, it is a vector of edges.
- weights are associated with the cliques.
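A sketch of the two constructors above (the hyperedge example is an assumption based on the stated signature):

using GenericTensorNetworks, Graphs

sg1 = SpinGlass(smallgraph(:petersen), UnitWeight())  # J = 1 on every edge, zero field
sg2 = SpinGlass(4, [[1, 2], [2, 3, 4]])               # 4 spins, one pairwise and one 3-spin clique
solve(GenericTensorNetwork(sg1), SizeMin())           # expected: the ground-state energy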
GenericTensorNetworks.MaxCut — Type

struct MaxCut{WT1<:Union{UnitWeight, Vector}, WT2<:Union{ZeroWeight, Vector}} <: GraphProblem

The cutting (maximum cut) problem.

Positional arguments
- graph is the problem graph.
- edge_weights are associated with the edges of the graph.
- vertex_weights are associated with the vertices of the graph.
GenericTensorNetworks.PaintShop — Type

struct PaintShop{LT} <: GraphProblem

The binary paint shop problem.

Positional arguments
- sequence is a vector of symbols, where each symbol is associated with a color.
- isfirst is a vector of boolean numbers, indicating whether the symbol is the first appearance in the sequence.
Examples
One can encode the paint shop problem abaccb as follows:
julia> syms = collect("abaccb");
julia> pb = GenericTensorNetwork(PaintShop(syms));
julia> solve(pb, SizeMin())[]
2.0ₜ
julia> solve(pb, ConfigsMin())[].c.data
2-element Vector{StaticBitVector{3, 1}}:
100
011
In our definition, we find the maximum number of unchanged colors in this sequence, i.e. (n-1) - (minimum number of color changes). In the output of maximum configurations, the two configurations are defined on 5 bonds, i.e. the pairs (i, i+1); 0 means the color changed, while 1 means the color stayed the same. If we denote the two "colors" as r and b, then the optimal painting is rbbbrr or brrrbb; both change the color twice.
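A hypothetical follow-up that decodes one optimal configuration into an actual painting, using the two helper functions documented later in this section:

config = solve(pb, ConfigsMin())[].c.data[1]              # e.g. the bit vector 100
coloring = paint_shop_coloring_from_config(PaintShop(syms), config)
num_paint_shop_color_switch(syms, coloring)               # expected: 2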
GenericTensorNetworks.Satisfiability — Type

struct Satisfiability{T, WT<:Union{UnitWeight, Vector}} <: GraphProblem

The satisfiability problem.

Positional arguments
- cnf is a conjunctive normal form (CNF) for specifying the satisfiability problem.
- weights are associated with clauses.
Examples
julia> @bools x y z a b c
julia> c1 = x ∨ ¬y
x ∨ ¬y
julia> c2 = c ∨ (¬a ∨ b)
c ∨ ¬a ∨ b
julia> c3 = (z ∨ ¬a) ∨ y
z ∨ ¬a ∨ y
julia> c4 = (c ∨ z) ∨ ¬b
c ∨ z ∨ ¬b
julia> cnf = (c1 ∧ c4) ∧ (c2 ∧ c3)
(x ∨ ¬y) ∧ (c ∨ z ∨ ¬b) ∧ (c ∨ ¬a ∨ b) ∧ (z ∨ ¬a ∨ y)
julia> gp = GenericTensorNetwork(Satisfiability(cnf));
julia> solve(gp, SizeMax())[]
4.0ₜ
GenericTensorNetworks.SetCovering — Type

struct SetCovering{ET, WT<:Union{UnitWeight, Vector}} <: GraphProblem

The set covering problem.

Positional arguments
- sets is a vector of vectors, each set being associated with a weight specified in weights.
- weights are associated with sets.
Examples
julia> sets = [[1, 2, 5], [1, 3], [2, 4], [3, 6], [2, 3, 6]]; # each set is a vertex
julia> gp = GenericTensorNetwork(SetCovering(sets));
julia> res = solve(gp, ConfigsMin())[]
(3.0, {10110, 10101})ₜ
GenericTensorNetworks.SetPacking — Type

struct SetPacking{ET, WT<:Union{UnitWeight, Vector}} <: GraphProblem

The set packing problem, a generalization of the independent set problem to hypergraphs.

Positional arguments
- sets is a vector of vectors, each set being associated with a weight specified in weights.
- weights are associated with sets.
Examples
julia> sets = [[1, 2, 5], [1, 3], [2, 4], [3, 6], [2, 3, 6]]; # each set is a vertex
julia> gp = GenericTensorNetwork(SetPacking(sets));
julia> res = solve(gp, ConfigsMax())[]
(2.0, {00110, 10010, 01100})ₜ
GenericTensorNetworks.OpenPitMining — Type

struct OpenPitMining{ET} <: GraphProblem

The open pit mining problem. This problem can be solved in polynomial time with the pseudoflow algorithm.

Positional arguments
- rewards is a matrix of rewards.
- blocks are the locations of the blocks.
Example
julia> rewards = [-4 -7 -7 -17 -7 -26;
0 39 -7 -7 -4 0;
0 0 1 8 0 0;
0 0 0 0 0 0;
0 0 0 0 0 0;
0 0 0 0 0 0];
julia> gp = GenericTensorNetwork(OpenPitMining(rewards));
julia> res = solve(gp, SingleConfigMax())[]
(21.0, ConfigSampler{12, 1, 1}(111000100000))ₜ
julia> is_valid_mining(rewards, res.c.data)
true
julia> print_mining(rewards, res.c.data)
-4 -7 -7 -17 -7 -26
◼ 39 -7 -7 -4 ◼
◼ ◼ 1 8 ◼ ◼
◼ ◼ ◼ ◼ ◼ ◼
◼ ◼ ◼ ◼ ◼ ◼
◼ ◼ ◼ ◼ ◼ ◼
You will see the mined blocks printed in green in a colored REPL.
Graph Problem Interfaces
To subtype GraphProblem, a new type must contain a code field to represent the (optimized) tensor network. The interfaces GenericTensorNetworks.generate_tensors, labels, flavors and get_weights are required; nflavor is optional.
GenericTensorNetworks.generate_tensors — Function

generate_tensors(func, problem::GenericTensorNetwork)

Generate a vector of tensors as the inputs of the tensor network contraction code problem.code. func is a function to customize the tensors; func(symbol) returns a vector of elements whose length is the same as the number of flavors.
Example
The following code gives you the maximum independent set size
julia> using Graphs, GenericTensorNetworks
julia> gp = GenericTensorNetwork(IndependentSet(smallgraph(:petersen)));
julia> getixsv(gp.code)
25-element Vector{Vector{Int64}}:
[1]
[2]
[3]
[4]
[5]
[6]
[7]
[8]
[9]
[10]
⋮
[3, 8]
[4, 5]
[4, 9]
[5, 10]
[6, 8]
[6, 9]
[7, 9]
[7, 10]
[8, 10]
julia> gp.code(GenericTensorNetworks.generate_tensors(Tropical(1.0), gp)...)
0-dimensional Array{Tropical{Float64}, 0}:
4.0ₜ
GenericTensorNetworks.labels — Function

labels(problem::GraphProblem) -> Vector
labels(problem::GenericTensorNetwork) -> Vector

The labels of a graph problem are defined as its degrees of freedom, e.g. for the maximum independent set problem, they are the indices of vertices 1, 2, 3, ..., while for the max cut problem, they are the edges.
GenericTensorNetworks.energy_terms — Function

energy_terms(problem::GraphProblem) -> Vector
energy_terms(problem::GenericTensorNetwork) -> Vector

The energy terms of a graph problem are defined as the tensor labels that carry local energies (or weights) in the graph problem.
GenericTensorNetworks.flavors — Function

flavors(::Type{<:GraphProblem}) -> Vector
flavors(::Type{<:GenericTensorNetwork}) -> Vector
It returns a vector of integers as the flavors of a degree of freedom. Its size is the same as the degree of freedom on a single vertex/edge.
GenericTensorNetworks.get_weights — Function

get_weights(problem::GraphProblem[, i::Int]) -> Vector
get_weights(problem::GenericTensorNetwork[, i::Int]) -> Vector

The weights for the problem, or the weights for the degree of freedom specified by the i-th term if a second argument is provided. Weights are associated with the energy_terms in the graph problem. In a graph polynomial, integer weights can be the exponents.
GenericTensorNetworks.chweights — Function

chweights(problem::GraphProblem, weights) -> GraphProblem
chweights(problem::GenericTensorNetwork, weights) -> GenericTensorNetwork

Change the weights for the problem and return a new problem instance. Weights are associated with the energy_terms in the graph problem. In a graph polynomial, integer weights can be the exponents.
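A sketch of changing weights (assuming one weight per energy term, i.e. one per vertex for the independent set problem):

using GenericTensorNetworks, Graphs

problem = IndependentSet(smallgraph(:petersen))
weighted = chweights(problem, fill(2, 10))        # give each of the 10 vertices weight 2
solve(GenericTensorNetwork(weighted), SizeMax())  # expected: 8.0ₜ, i.e. 4 vertices × weight 2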
GenericTensorNetworks.nflavor — Function

nflavor(::Type{<:GraphProblem}) -> Int
nflavor(::Type{<:GenericTensorNetwork}) -> Int
nflavor(::GT) where GT<:GraphProblem -> Int
nflavor(::GenericTensorNetwork{GT}) where GT<:GraphProblem -> Int
Bond size is equal to the number of flavors.
GenericTensorNetworks.fixedvertices — Function

fixedvertices(tnet::GenericTensorNetwork) -> Dict
Returns the fixed vertices in the graph problem, which is a dictionary specifying the fixed dimensions.
Graph Problem Utilities
GenericTensorNetworks.is_independent_set — Function

is_independent_set(g::SimpleGraph, config)

Return true if config (a vector of boolean numbers as the mask of vertices) is an independent set of graph g.
GenericTensorNetworks.is_maximal_independent_set — Function

is_maximal_independent_set(g::SimpleGraph, config)

Return true if config (a vector of boolean numbers as the mask of vertices) is a maximal independent set of graph g.
GenericTensorNetworks.is_dominating_set — Function

is_dominating_set(g::SimpleGraph, config)

Return true if config (a vector of boolean numbers as the mask of vertices) is a dominating set of graph g.
GenericTensorNetworks.is_vertex_coloring — Function

is_vertex_coloring(graph::SimpleGraph, config)

Returns true if the coloring specified by config is valid, i.e. it does not violate the constraint that the two vertices of an edge must have different colors.
GenericTensorNetworks.is_matching — Function

is_matching(graph::SimpleGraph, config)

Returns true if config is a valid matching on graph, and false if a vertex is matched more than once. config is a vector of boolean variables in one-to-one correspondence with edges(graph).
GenericTensorNetworks.is_set_covering — Function

is_set_covering(sets::AbstractVector, config)

Return true if config (a vector of boolean numbers as the mask of sets) is a set covering of sets.
GenericTensorNetworks.is_set_packing — Function

is_set_packing(sets::AbstractVector, config)

Return true if config (a vector of boolean numbers as the mask of sets) is a set packing of sets.
GenericTensorNetworks.cut_size — Function

cut_size(g::SimpleGraph, config; edge_weights=UnitWeight(), vertex_weights=ZeroWeight())

Compute the cut size for the vertex configuration config (an iterator).
GenericTensorNetworks.spinglass_energy — Function

spinglass_energy(g::SimpleGraph, config; J, h=ZeroWeight())
spinglass_energy(cliques::AbstractVector{Vector{Int}}, config; weights=UnitWeight())
spinglass_energy(sg::SpinGlass, config)

Compute the spin glass state energy for the vertex configuration config. In the configuration, spin ↑ is mapped to configuration 0, while spin ↓ is mapped to configuration 1. Let $G=(V,E)$ be the input graph; the Hamiltonian is
\[H = \sum_{ij \in E} J_{ij} s_i s_j + \sum_{i \in V} h_i s_i,\]
where $s_i \in \{-1, 1\}$ stands for spin ↓ and spin ↑.
In the hypergraph case, the Hamiltonian is
\[H = \sum_{c \in C} w_c \prod_{i \in c} s_i,\]
where $C$ is the set of cliques, and $w_c$ is the weight of the clique $c$.
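For concreteness, a hand-rolled check of the pairwise Hamiltonian above, assuming unit couplings J and zero fields h:

# map configuration bits to spins: 0 ↦ s = +1 (spin ↑), 1 ↦ s = -1 (spin ↓)
function pairwise_energy(edges, config)
    s = 1 .- 2 .* collect(config)
    return sum(s[i] * s[j] for (i, j) in edges)
end

pairwise_energy([(1, 2), (2, 3)], [0, 1, 0])  # s = [1, -1, 1], energy = -1 + -1 = -2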
GenericTensorNetworks.num_paint_shop_color_switch — Function

num_paint_shop_color_switch(sequence::AbstractVector, coloring)
Returns the number of color switches.
GenericTensorNetworks.paint_shop_coloring_from_config — Function

paint_shop_coloring_from_config(p::PaintShop, config)

Returns a valid painting from the paint shop configuration (given by the configuration solvers). The config is a sequence of 0s and 1s, where 0 means painting the first appearance of a car in blue, and 1 otherwise.
GenericTensorNetworks.mis_compactify! — Function

mis_compactify!(tropicaltensor; potential=nothing)

Compactify a tropical tensor for the maximum independent set problem. It eliminates some entries by setting them to zero, using the criterion that removing these entries does not change the MIS size of its parent graph (reference to be added).
Arguments
- tropicaltensor::AbstractArray{T}: the tropical tensor

Keyword arguments
- potential=nothing: the maximum possible MIS contribution from each open vertex
GenericTensorNetworks.CNF — Type

CNF{T}
CNF(clauses)

Boolean expression in conjunctive normal form. clauses is a vector of CNFClause; the CNF is satisfied if and only if all clauses are satisfied.
Example
julia> @bools x y z
julia> cnf = (x ∨ y) ∧ (¬y ∨ z)
(x ∨ y) ∧ (¬y ∨ z)
julia> satisfiable(cnf, Dict([:x=>true, :y=>false, :z=>true]))
true
julia> satisfiable(cnf, Dict([:x=>false, :y=>false, :z=>true]))
false
GenericTensorNetworks.CNFClause — Type

CNFClause{T}
CNFClause(vars)

A clause in a CNF; its value is the logical or of vars, where vars is a vector of BoolVar.
GenericTensorNetworks.BoolVar — Type

BoolVar{T}
BoolVar(name, neg)
Boolean variable for constructing CNF clauses.
GenericTensorNetworks.satisfiable — Function

satisfiable(cnf::CNF, config::AbstractDict)

Returns true if an assignment of variables satisfies a CNF.
GenericTensorNetworks.@bools — Macro

@bools(syms::Symbol...)

Create some boolean variables of type BoolVar in the current scope that can be used to create a CNF.
Example
julia> @bools x y z
julia> (x ∨ y) ∧ (¬y ∨ z)
(x ∨ y) ∧ (¬y ∨ z)
GenericTensorNetworks.:∨ — Function

GenericTensorNetworks.:¬ — Function

¬(var::BoolVar)

Negation of a boolean variable of type BoolVar.

GenericTensorNetworks.:∧ — Function

GenericTensorNetworks.is_valid_mining — Function

is_valid_mining(rewards::AbstractMatrix, config)

Return true if config (a boolean mask for the feasible region) is a valid mining of rewards.
GenericTensorNetworks.print_mining — Function

print_mining(rewards::AbstractMatrix, config)

Print the mining solution in a colored REPL.
Properties
GenericTensorNetworks.PartitionFunction — Type

struct PartitionFunction{T} <: GenericTensorNetworks.AbstractProperty

Fields
- beta

Compute the partition function for the target problem.
- The corresponding tensor element type is T.
GenericTensorNetworks.SizeMax — Type

SizeMax{K} <: AbstractProperty
SizeMax(k::Int)

The maximum-K set sizes, e.g. the largest size of the IndependentSet problem is also known as the independence number.
- The corresponding tensor element type is the max-plus tropical number Tropical if K is Single, and ExtendedTropical if K is an integer.
- It is compatible with weighted graph problems.
- BLAS (on CPU) and GPU are supported only if K is Single.
GenericTensorNetworks.SizeMin — Type

SizeMin{K} <: AbstractProperty
SizeMin(k::Int)

The minimum-K set sizes, e.g. the smallest size of the MaximalIS problem is also known as the independent domination number.
- The corresponding tensor element type is the inverted max-plus tropical number Tropical if K is Single, and the inverted ExtendedTropical if K is an integer. The inverted tropical number emulates the min-plus tropical number.
- It is compatible with weighted graph problems.
- BLAS (on CPU) and GPU are supported only if K is Single.
GenericTensorNetworks.CountingAll — Type

CountingAll <: AbstractProperty
CountingAll()

Counting the total number of sets, e.g. for the IndependentSet problem, it counts the independent sets.
- The corresponding tensor element type is Base.Real.
- The weights on the graph do not have an effect.
- BLAS (on GPU and CPU) and GPU are supported.
GenericTensorNetworks.CountingMax — Type

CountingMax{K} <: AbstractProperty
CountingMax(K=Single)

Counting the number of sets with largest-K sizes, e.g. for the IndependentSet problem, it counts the independent sets of sizes $\alpha(G), \alpha(G)-1, \ldots, \alpha(G)-K+1$.
- The corresponding tensor element type is CountingTropical if K is Single, and TruncatedPoly{K} if K is an integer.
- Weighted graph problems are only supported if K is Single.
- GPU is supported.
GenericTensorNetworks.CountingMin — Type

CountingMin{K} <: AbstractProperty
CountingMin(K=Single)

Counting the number of sets with smallest-K sizes.
- The corresponding tensor element type is the inverted CountingTropical if K is Single, and TruncatedPoly{K} if K is an integer.
- Weighted graph problems are only supported if K is Single.
- GPU is supported.
GenericTensorNetworks.GraphPolynomial — Type

GraphPolynomial{METHOD} <: AbstractProperty
GraphPolynomial(; method=:finitefield, kwargs...)

Compute the graph polynomial, e.g. for the IndependentSet problem, it is the independence polynomial. The METHOD type parameter can be one of the following symbols:

Method Argument
- :finitefield, uses finite field algebra to fit the polynomial.
  - The corresponding tensor element type is Mods.Mod.
  - It does not have round-off errors.
  - GPU is supported.
  - It accepts the keyword argument maxorder (optional, e.g. the MIS size in the IndependentSet problem).
- :polynomial and :laurent, use (Laurent) polynomial numbers to solve the polynomial directly.
  - The corresponding tensor element types are Polynomial and LaurentPolynomial.
  - It might have small round-off errors depending on the data type used for storing the counting.
  - It has a memory overhead that is linear in the graph size.
- :fft, uses fast Fourier transformation to fit the polynomial.
  - The corresponding tensor element type is Base.Complex.
  - It has (controllable) round-off errors.
  - BLAS and GPU are supported.
  - It accepts keyword arguments maxorder (optional) and r; if r > 1, one has better precision for coefficients of large order, while if r < 1, one has better precision for coefficients of small order.
- :fitting, fits the polynomial directly.
  - The corresponding tensor element type is a floating point number like Base.Float64.
  - It has round-off errors.
  - BLAS and GPU are supported; it is the fastest among all methods.

Graph polynomials are not defined for weighted graph problems.
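A minimal sketch computing the independence polynomial of the Petersen graph with the default :finitefield method:

using GenericTensorNetworks, Graphs

net = GenericTensorNetwork(IndependentSet(smallgraph(:petersen)))
poly = solve(net, GraphPolynomial())[]  # the k-th coefficient counts independent sets of size k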
GenericTensorNetworks.SingleConfigMax — Type

SingleConfigMax{K, BOUNDED} <: AbstractProperty
SingleConfigMax(k::Int; bounded=false)

Finding a single solution for the largest-K sizes, e.g. for the IndependentSet problem, it is one of the maximum independent sets.
- The corresponding data type is CountingTropical{Float64,<:ConfigSampler} if BOUNDED is false, and Tropical otherwise.
- Weighted graph problems are supported.
- GPU is supported.

Keyword Arguments
- bounded, if it is true, use the bounding trick (or boolean gradients) to reduce the working memory needed to store intermediate configurations.
GenericTensorNetworks.SingleConfigMin — Type

SingleConfigMin{K, BOUNDED} <: AbstractProperty
SingleConfigMin(k::Int; bounded=false)

Finding a single solution with the smallest-K sizes.
- The corresponding data type is the inverted CountingTropical{Float64,<:ConfigSampler} if BOUNDED is false, and the inverted Tropical otherwise.
- Weighted graph problems are supported.
- GPU is supported.

Keyword Arguments
- bounded, if it is true, use the bounding trick (or boolean gradients) to reduce the working memory needed to store intermediate configurations.
GenericTensorNetworks.ConfigsAll — Type

ConfigsAll{TREESTORAGE} <: AbstractProperty
ConfigsAll(; tree_storage=false)

Find all valid configurations, e.g. for the IndependentSet problem, it is finding all independent sets.
- The corresponding data type is ConfigEnumerator.
- Weights do not take effect.

Keyword Arguments
- tree_storage, if it is true, use the more memory-efficient tree structure to store the configurations.
GenericTensorNetworks.ConfigsMax — Type

ConfigsMax{K, BOUNDED, TREESTORAGE} <: AbstractProperty
ConfigsMax(K=Single; bounded=true, tree_storage=false)

Find configurations with the largest-K sizes, e.g. for the IndependentSet problem, it is finding all independent sets of sizes $\alpha(G), \alpha(G)-1, \ldots, \alpha(G)-K+1$.
- The corresponding data type is CountingTropical{Float64,<:ConfigEnumerator} if K is Single, and TruncatedPoly{K,<:ConfigEnumerator} if K is an integer.
- Weighted graph problems are only supported if K is Single.

Keyword Arguments
- bounded, if it is true, use the bounding trick (or boolean gradients) to reduce the working memory needed to store intermediate configurations.
- tree_storage, if it is true, use the more memory-efficient tree structure to store the configurations.
GenericTensorNetworks.ConfigsMin — Type

ConfigsMin{K, BOUNDED, TREESTORAGE} <: AbstractProperty
ConfigsMin(K=Single; bounded=true, tree_storage=false)

Find configurations with the smallest-K sizes.
- The corresponding data type is the inverted CountingTropical{Float64,<:ConfigEnumerator} if K is Single, and the inverted TruncatedPoly{K,<:ConfigEnumerator} if K is an integer.
- Weighted graph problems are only supported if K is Single.

Keyword Arguments
- bounded, if it is true, use the bounding trick (or boolean gradients) to reduce the working memory needed to store intermediate configurations.
- tree_storage, if it is true, use the more memory-efficient tree structure to store the configurations.
Element Algebras
GenericTensorNetworks.is_commutative_semiring — Function

is_commutative_semiring(a::T, b::T, c::T) where T

Check if the elements a, b and c satisfy the commutative semiring requirements.
\[\begin{align*} (a \oplus b) \oplus c = a \oplus (b \oplus c) & \hspace{5em}\triangleright\text{commutative monoid $\oplus$ with identity $\mathbb{0}$}\\ a \oplus \mathbb{0} = \mathbb{0} \oplus a = a &\\ a \oplus b = b \oplus a &\\ &\\ (a \odot b) \odot c = a \odot (b \odot c) & \hspace{5em}\triangleright \text{commutative monoid $\odot$ with identity $\mathbb{1}$}\\ a \odot \mathbb{1} = \mathbb{1} \odot a = a &\\ a \odot b = b \odot a &\\ &\\ a \odot (b\oplus c) = a\odot b \oplus a\odot c & \hspace{5em}\triangleright \text{left and right distributive}\\ (a\oplus b) \odot c = a\odot c \oplus b\odot c &\\ &\\ a \odot \mathbb{0} = \mathbb{0} \odot a = \mathbb{0} \end{align*}\]
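A quick check that the max-plus tropical numbers documented below satisfy these axioms:

a, b, c = Tropical(2.0), Tropical(3.0), Tropical(5.0)
is_commutative_semiring(a, b, c)  # expected: true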
TropicalNumbers.Tropical — Type

TropicalMaxPlus{T} = Tropical{T} <: AbstractSemiring

TropicalMaxPlus is a semiring algebra that can be described by
- Tropical (TropicalMaxPlus), (ℝ, max, +, -Inf, 0).

It maps
- + to max in regular algebra,
- * to + in regular algebra,
- 1 to 0 in regular algebra,
- 0 to -Inf in regular algebra (for integer content types, this is chosen as a small integer).
Example
julia> TropicalMaxPlus(1.0) + TropicalMaxPlus(3.0)
3.0ₜ
julia> TropicalMaxPlus(1.0) * TropicalMaxPlus(3.0)
4.0ₜ
julia> one(TropicalMaxPlusF64)
0.0ₜ
julia> zero(TropicalMaxPlusF64)
-Infₜ
TropicalNumbers.CountingTropical — Type

CountingTropical{T,CT} <: Number

The counting tropical number type is also a semiring algebra. It is the tropical algebra with one extra field for counting; it was introduced in arXiv:2008.06888.
Example
julia> CountingTropical(1.0, 5.0) + CountingTropical(3.0, 2.0)
(3.0, 2.0)ₜ
julia> CountingTropical(1.0, 5.0) * CountingTropical(3.0, 2.0)
(4.0, 10.0)ₜ
julia> one(CountingTropicalF64)
(0.0, 1.0)ₜ
julia> zero(CountingTropicalF64)
(-Inf, 0.0)ₜ
GenericTensorNetworks.ExtendedTropical — Type

ExtendedTropical{K,TO} <: Number
ExtendedTropical{K}(orders)

Extended tropical numbers with the largest K orders kept, or the TruncatedPoly without coefficients. TO is the element type of the orders, usually Tropical numbers. This algebra maps
- + to finding the largest K values in the union of two sets,
- * to finding the largest K values among the sum combinations of two sets,
- 0 to the set [-Inf, -Inf, ..., -Inf, -Inf],
- 1 to the set [-Inf, -Inf, ..., -Inf, 0].

Fields
- orders is a vector of Tropical (CountingTropical) numbers as the largest-K solution sizes (solutions).
Examples
julia> x = ExtendedTropical{3}(Tropical.([1.0, 2, 3]))
ExtendedTropical{3, Tropical{Float64}}(Tropical{Float64}[1.0ₜ, 2.0ₜ, 3.0ₜ])
julia> y = ExtendedTropical{3}(Tropical.([-Inf, 2, 5]))
ExtendedTropical{3, Tropical{Float64}}(Tropical{Float64}[-Infₜ, 2.0ₜ, 5.0ₜ])
julia> x * y
ExtendedTropical{3, Tropical{Float64}}(Tropical{Float64}[6.0ₜ, 7.0ₜ, 8.0ₜ])
julia> x + y
ExtendedTropical{3, Tropical{Float64}}(Tropical{Float64}[2.0ₜ, 3.0ₜ, 5.0ₜ])
julia> one(x)
ExtendedTropical{3, Tropical{Float64}}(Tropical{Float64}[-Infₜ, -Infₜ, 0.0ₜ])
julia> zero(x)
ExtendedTropical{3, Tropical{Float64}}(Tropical{Float64}[-Infₜ, -Infₜ, -Infₜ])
GenericTensorNetworks.Mods.Mod — Type

Mod{m}(v)

Creates a modular number in mod m with value mod(v,m).
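A small sketch of modular arithmetic with this type (the values in the comments follow from the definition above):

using GenericTensorNetworks.Mods

x = Mod{7}(10)  # stores mod(10, 7) == 3
y = Mod{7}(5)
x + y           # (3 + 5) mod 7 == 1
x * y           # (3 * 5) mod 7 == 1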
GenericTensorNetworks.TruncatedPoly — Type

TruncatedPoly{K,T,TO} <: Number
TruncatedPoly(coeffs::Tuple, maxorder)

Polynomial truncated to the largest K orders. T is the coefficient type and TO is the order type.

Fields
- coeffs is the largest-K coefficients of a polynomial. In GenericTensorNetworks, it can be the counting or enumeration of solutions.
- maxorder is the order of the polynomial.
Examples
julia> TruncatedPoly((1,2,3), 6)
x^4 + 2*x^5 + 3*x^6
julia> TruncatedPoly((1,2,3), 6) * TruncatedPoly((5,2,1), 3)
20*x^7 + 8*x^8 + 3*x^9
julia> TruncatedPoly((1,2,3), 6) + TruncatedPoly((5,2,1), 3)
x^4 + 2*x^5 + 3*x^6
GenericTensorNetworks.Max2Poly — Type

Max2Poly{T,TO} = TruncatedPoly{2,T,TO}
Max2Poly(a, b, maxorder)

A shorthand of TruncatedPoly{2}.
GenericTensorNetworks.ConfigEnumerator — Type

ConfigEnumerator{N,S,C} <: AbstractSetNumber

The set algebra for enumerating configurations, where N is the length of configurations, C is the size of the storage in units of UInt64, and S is the bit width needed to store a single element in a configuration, i.e. log2(# of flavors); for bitstrings, it is 1.

Fields
- data is a vector of StaticElementVector as the solution set.
Examples
julia> a = ConfigEnumerator([StaticBitVector([1,1,1,0,0]), StaticBitVector([1,0,0,0,1])])
{11100, 10001}
julia> b = ConfigEnumerator([StaticBitVector([0,0,0,0,0]), StaticBitVector([1,0,1,0,1])])
{00000, 10101}
julia> a + b
{11100, 10001, 00000, 10101}
julia> one(a)
{00000}
julia> zero(a)
{}
GenericTensorNetworks.SumProductTree — Type

SumProductTree{ET} <: AbstractSetNumber

A configuration enumerator encoded in a tree. It is the most natural representation given by a sum-product network and is often more memory efficient than storing the configurations in a vector. One can use generate_samples to sample configurations from this tree structure efficiently.
Fields
- tag is one of ZERO, ONE, LEAF, SUM, PROD.
- data is the element stored in a LEAF node.
- left and right are the two operands of a SUM or PROD node.
Examples
julia> s = SumProductTree(bv"00111")
00111
julia> q = SumProductTree(bv"10000")
10000
julia> x = s + q
+ (count = 2.0)
├─ 00111
└─ 10000
julia> y = x * x
* (count = 4.0)
├─ + (count = 2.0)
│ ├─ 00111
│ └─ 10000
└─ + (count = 2.0)
├─ 00111
└─ 10000
julia> collect(y)
4-element Vector{StaticBitVector{5, 1}}:
00111
10111
10111
10000
julia> zero(s)
∅
julia> one(s)
00000
GenericTensorNetworks.ConfigSampler — Type

ConfigSampler{N,S,C} <: AbstractSetNumber
ConfigSampler(elements::StaticElementVector)

The algebra for sampling one configuration, where N is the length of configurations, C is the size of the storage in units of UInt64, and S is the bit width needed to store a single element in a configuration, i.e. log2(# of flavors); for bitstrings, it is 1.

ConfigSampler is a probabilistic commutative semiring; adding two config samplers does not give deterministic results.

Fields
- data is a StaticElementVector as the sampled solution.
Examples
julia> ConfigSampler(StaticBitVector([1,1,1,0,0]))
ConfigSampler{5, 1, 1}(11100)
julia> ConfigSampler(StaticBitVector([1,1,1,0,0])) + ConfigSampler(StaticBitVector([1,0,1,0,0]))
ConfigSampler{5, 1, 1}(10100)
julia> ConfigSampler(StaticBitVector([1,1,1,0,0])) * ConfigSampler(StaticBitVector([0,0,0,0,1]))
ConfigSampler{5, 1, 1}(11101)
julia> one(ConfigSampler{5, 1, 1})
ConfigSampler{5, 1, 1}(00000)
julia> zero(ConfigSampler{5, 1, 1})
ConfigSampler{5, 1, 1}(11111)
GenericTensorNetworks also exports the Polynomial and LaurentPolynomial types defined in the package Polynomials.
For reading the properties from the above element types, one can use the following functions.
GenericTensorNetworks.read_size — Function

read_size(x)
Read the size information from the generic element.
GenericTensorNetworks.read_count — Function

read_count(x)
Read the counting information from the generic element.
GenericTensorNetworks.read_config — Function

read_config(x; keeptree=false)

Read the configuration information from the generic element; if keeptree=true, the tree structure will not be flattened.
GenericTensorNetworks.read_size_count — Function

read_size_count(x)
Read the size and counting information from the generic element.
GenericTensorNetworks.read_size_config — Function

read_size_config(x; keeptree=false)

Read the size and configuration information from the generic element. If keeptree=true, the tree structure will not be flattened.
The following functions are for saving and loading configurations.
GenericTensorNetworks.StaticBitVector — Type

StaticBitVector{N,C} = StaticElementVector{N,1,C}
StaticBitVector(x::AbstractVector)
Examples
julia> sb = StaticBitVector([1,0,0,1,1])
10011
julia> sb[3]
0x0000000000000000
julia> collect(Int, sb)
5-element Vector{Int64}:
1
0
0
1
1
GenericTensorNetworks.StaticElementVector — Type

StaticElementVector{N,S,C}
StaticElementVector(nflavor::Int, x::AbstractVector)

N is the length of the vector, C is the size of the storage in units of UInt64, and S is the stride defined as log2(# of flavors). When the number of flavors is 2, it is a StaticBitVector.

Fields
- data is a tuple of UInt64 for storing the configuration of static elements.
Examples
julia> ev = StaticElementVector(3, [1,2,0,1,2])
12012
julia> ev[2]
0x0000000000000002
julia> collect(Int, ev)
5-element Vector{Int64}:
1
2
0
1
2
GenericTensorNetworks.OnehotVec — Type

OnehotVec{N,NF}
OnehotVec{N,NF}(loc, val)

One-hot vector type, where N is the vector length and NF is the number of flavors.
GenericTensorNetworks.save_configs — Function

save_configs(filename, data::ConfigEnumerator; format=:binary)

Save configurations data to the file filename. The format is :binary or :text.
GenericTensorNetworks.load_configs — Function

load_configs(filename; format=:binary, bitlength=nothing, nflavors=2)

Load configurations from the file filename. The format is :binary or :text. If the format is :binary, the bitstring length bitlength must be specified; nflavors specifies the degree of freedom.
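A hypothetical round trip for the Petersen-graph example (bitlength=10 matches its vertex count; the file name is arbitrary):

res = solve(GenericTensorNetwork(IndependentSet(smallgraph(:petersen))), ConfigsMax())[]
save_configs("configs.dat", res.c; format=:binary)
configs = load_configs("configs.dat"; format=:binary, bitlength=10)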
GenericTensorNetworks.save_sumproduct — Function

save_sumproduct(filename, t::SumProductTree)
Serialize a sum-product tree into a file.
GenericTensorNetworks.load_sumproduct — Function

load_sumproduct(filename)
Deserialize a sum-product tree from a file.
GenericTensorNetworks.@bv_str — Macro

Constructing a static bit vector.
GenericTensorNetworks.onehotv — Function

onehotv(::Type{<:StaticElementVector}, i, v)
onehotv(::Type{<:StaticBitVector}, i)

Returns a static element vector, with the value at location i being v (1 if not specified).
GenericTensorNetworks.generate_samples — Function

generate_samples(t::SumProductTree, nsamples::Int)

Directly sample configurations from a SumProductTree instance.
Examples
julia> using Graphs
julia> g = smallgraph(:petersen)
{10, 15} undirected simple Int64 graph
julia> t = solve(GenericTensorNetwork(IndependentSet(g)), ConfigsAll(; tree_storage=true))[];
julia> samples = generate_samples(t, 1000);
julia> all(s->is_independent_set(g, s), samples)
true
GenericTensorNetworks.hamming_distribution — Function

hamming_distribution(S, T)

Compute the distribution of pairwise Hamming distances, which is defined as
\[c(k) := \sum_{\sigma\in S, \tau\in T} \delta({\rm dist}(\sigma, \tau), k)\]
where $\delta$ is a function that returns 1 if its two arguments are equivalent and 0 otherwise, and ${\rm dist}$ is the Hamming distance function.
Returns the counts as a vector.
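A sketch applying it to the set of maximum independent sets of the Petersen graph (assuming ConfigEnumerator inputs for S and T):

res = solve(GenericTensorNetwork(IndependentSet(smallgraph(:petersen))), ConfigsMax())[]
hamming_distribution(res.c, res.c)  # entry k+1 counts pairs at Hamming distance k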
Tensor Network
OMEinsumContractionOrders.optimize_code — Function

optimize_code(eincode, size_dict, optimizer=GreedyMethod(), simplifier=nothing, permute=true) -> optimized_eincode

Optimize the einsum contraction code and reduce the time/space complexity of the tensor network contraction. Returns a NestedEinsum instance. Input arguments are
- eincode is an einsum contraction code instance, one of DynamicEinCode, StaticEinCode or NestedEinsum.
- size is a dictionary of "edge label => edge size" that contains the size information; one can use uniformsize(eincode, 2) to create a uniform size.
- optimizer is a CodeOptimizer instance, which should be one of GreedyMethod, KaHyParBipartite, SABipartite or TreeSA. Check their docstrings for details.
- simplifier is one of MergeVectors or MergeGreedy.
- permute: optimize the permutation if permute is true.
Examples
julia> using OMEinsum
julia> code = ein"ij, jk, kl, il->"
ij, jk, kl, il ->
julia> optimize_code(code, uniformsize(code, 2), TreeSA())
SlicedEinsum{Char, NestedEinsum{DynamicEinCode{Char}}}(Char[], ki, ki ->
├─ jk, ij -> ki
│ ├─ jk
│ └─ ij
└─ kl, il -> ki
├─ kl
└─ il
)
OMEinsum.getixsv — Function

getixsv(code)

Get the labels of the input tensors for EinCode, NestedEinsum and some other einsum-like objects. Returns a vector of vectors.
julia> getixsv(ein"(ij,jk),k->i")
3-element Vector{Vector{Char}}:
['i', 'j']
['j', 'k']
['k']
OMEinsum.getiyv — Function

getiyv(code)

Get the labels of the output tensor for EinCode, NestedEinsum and some other einsum-like objects. Returns a vector.
julia> getiyv(ein"(ij,jk),k->i")
1-element Vector{Char}:
'i': ASCII/Unicode U+0069 (category Ll: Letter, lowercase)
OMEinsumContractionOrders.contraction_complexity — Function

contraction_complexity(eincode, size_dict) -> ContractionComplexity

Returns the time, space and read-write complexity of the einsum contraction. The returned object contains 3 fields:
- the time complexity tc, defined as log2(number of element-wise multiplications),
- the space complexity sc, defined as log2(size of the maximum intermediate tensor),
- the read-write complexity rwc, defined as log2(the number of read-write operations).
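A minimal sketch on the einsum code from the optimize_code example above:

using OMEinsum

code = ein"ij,jk,kl,il->"
optcode = optimize_code(code, uniformsize(code, 2), GreedyMethod())
cc = contraction_complexity(optcode, uniformsize(code, 2))
cc.tc, cc.sc  # log2 time and space complexities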
GenericTensorNetworks.estimate_memory — Function

estimate_memory(
    problem::GenericTensorNetwork,
    property::GenericTensorNetworks.AbstractProperty;
    T
) -> Any

Memory estimation in number of bytes to compute a certain property of a problem. T is the base type.
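A sketch comparing the memory needed for two properties of the same network (enumerating configurations is expected to cost more than computing the size):

using GenericTensorNetworks, Graphs

net = GenericTensorNetwork(IndependentSet(smallgraph(:petersen)))
estimate_memory(net, SizeMax(); T=Float64)
estimate_memory(net, ConfigsMax(); T=Float64)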
OMEinsum.@ein_str — Macro

ein"ij,jk -> ik"(A,B)

String macro interface that understands numpy.einsum's notation. It translates strings into StaticEinCode structs that can be called to evaluate an einsum. To control the evaluation order, use parentheses: instead of an EinCode, a NestedEinsum is then returned, which evaluates the expression according to the parentheses. The valid character ranges for index labels are a-z and α-ω.
example
julia> a, b, c = rand(10,10), rand(10,10), rand(10,1);
julia> ein"ij,jk,kl -> il"(a,b,c) ≈ ein"(ij,jk),kl -> il"(a,b,c) ≈ a * b * c
true
OMEinsumContractionOrders.GreedyMethod — Type

GreedyMethod{MT}
GreedyMethod(; method=MinSpaceOut(), nrepeat=10)

The fast but poor greedy optimizer. Input arguments are
- method is MinSpaceDiff() or MinSpaceOut(). MinSpaceOut chooses the contraction that produces a minimum output tensor size; MinSpaceDiff chooses the contraction that decreases the space most.
- nrepeat is the number of repetitions; the best contraction order among them is returned.
OMEinsumContractionOrders.TreeSA — Type

TreeSA{RT,IT,GM}
TreeSA(; sc_target=20, βs=collect(0.01:0.05:15), ntrials=10, niters=50,
    sc_weight=1.0, rw_weight=0.2, initializer=:greedy, greedy_config=GreedyMethod(; nrepeat=1))

Optimize the einsum contraction pattern using simulated annealing on the tensor expression tree.
- sc_target is the target space complexity.
- ntrials, βs and niters are annealing parameters: do ntrials independent annealings, each with inverse temperatures specified by βs, and in each temperature, do niters updates of the tree.
- sc_weight is the relative importance factor of the space complexity in the loss, compared with the time complexity.
- rw_weight is the relative importance factor of memory reads and writes in the loss, compared with the time complexity.
- initializer specifies how to determine the initial configuration; it can be :greedy or :random. If the :greedy method is used to generate the initial configuration, it also uses the two extra arguments greedy_method and greedy_nrepeat.
- nslices is the number of sliced legs, default is 0.
- fixed_slices is a vector of sliced legs, default is [].
OMEinsumContractionOrders.SABipartite — Type

SABipartite{RT,BT}
SABipartite(; sc_target=25, ntrials=50, βs=0.1:0.2:15.0, niters=1000,
    max_group_size=40, greedy_config=GreedyMethod(), initializer=:random)

Optimize the einsum code contraction order using the simulated annealing bipartition + greedy approach. This program first recursively cuts the tensors into several groups using simulated annealing, with the maximum group size specified by max_group_size and the maximum space complexity specified by sc_target, then finds the contraction order inside each group with the greedy search algorithm. Other arguments are
- size_dict, a dictionary that specifies leg dimensions,
- sc_target is the target space complexity, defined as log2(number of elements in the largest tensor),
- max_group_size is the maximum group size for which greedy search is allowed,
- βs is a list of inverse temperatures 1/T,
- niters is the number of iterations in each temperature,
- ntrials is the number of repetitions (with different random seeds),
- greedy_config configures the greedy method,
- initializer, the partition configuration initializer; one can choose :random or :greedy (slow but better).
OMEinsumContractionOrders.KaHyParBipartite — Type

KaHyParBipartite{RT,IT,GM}
KaHyParBipartite(; sc_target, imbalances=collect(0.0:0.005:0.8),
    max_group_size=40, greedy_config=GreedyMethod())

Optimize the einsum code contraction order using the KaHyPar + greedy approach. This program first recursively cuts the tensors into several groups using KaHyPar, with the maximum group size specified by max_group_size and the maximum space complexity specified by sc_target, then finds the contraction order inside each group with the greedy search algorithm. Other arguments are
- sc_target is the target space complexity, defined as log2(number of elements in the largest tensor),
- imbalances is a KaHyPar parameter that controls the group sizes in the hierarchical bipartition,
- max_group_size is the maximum group size for which greedy search is allowed,
- greedy_config is a greedy optimizer.
OMEinsumContractionOrders.MergeVectors — Type

MergeVectors <: CodeSimplifier
MergeVectors()

Contraction code simplifier (used to reduce the time of calling optimizers) that merges vectors into their closest tensors.
OMEinsumContractionOrders.MergeGreedy — Type

MergeGreedy <: CodeSimplifier
MergeGreedy(; threshhold=-1e-12)

Contraction code simplifier (used to reduce the time of calling optimizers) that greedily merges tensors if the space complexity of the merged tensors is reduced (difference smaller than the threshhold).
Others
Graph
LuxorGraphPlot.show_graph — Function

show_graph([f, ]graph::AbstractGraph;
    kwargs...
)

Show a graph in VSCode, Pluto or a Jupyter notebook, or save it to a file.

Positional arguments
- f is a function that returns extra Luxor plotting statements.
- graph is a graph instance.
- locs is a vector of tuples for specifying the vertex locations, or an AbstractLayout instance.
Keyword arguments
- config is a GraphDisplayConfig instance.
- vertex_colors is a vector of color strings for specifying vertex fill colors.
- vertex_sizes is a vector of real numbers for specifying vertex sizes.
- vertex_shapes is a vector of strings for specifying vertex shapes; the string should be "circle" or "box".
- vertex_stroke_colors is a vector of color strings for specifying vertex stroke colors.
- vertex_text_colors is a vector of color strings for specifying vertex text colors.
- edge_colors is a vector of color strings for specifying edge colors.
- texts is a vector of strings for labeling vertices.
- padding_left::Int = 10, the padding on the left side of the drawing
- padding_right::Int = 10, the padding on the right side of the drawing
- padding_top::Int = 10, the padding on the top side of the drawing
- padding_bottom::Int = 10, the padding on the bottom side of the drawing
- format is the output format, which can be :svg, :png or :pdf.
- filename is a string as the output filename.
Example
julia> using Graphs, LuxorGraphPlot
julia> show_graph(smallgraph(:petersen); format=:png, vertex_colors=rand(["blue", "red"], 10));
GenericTensorNetworks.show_configs — Function

show_configs(gp::GraphProblem, locs, configs::AbstractMatrix; kwargs...)
show_configs(graph::SimpleGraph, locs, configs::AbstractMatrix; nflavor=2, kwargs...)
Show a gallery of configurations on a graph.
GenericTensorNetworks.show_einsum — Function

show_einsum(ein::AbstractEinsum;
tensor_locs=nothing,
label_locs=nothing, # dict
spring::Bool=true,
optimal_distance=25.0,
tensor_size=15,
tensor_color="black",
tensor_text_color="white",
annotate_tensors=false,
label_size=7,
label_color="black",
open_label_color="black",
annotate_labels=true,
kwargs...
)
Positional arguments
- ein is an einsum contraction code (provided by the package OMEinsum).

Keyword arguments
- locs is a tuple of tensor_locs (vector) and label_locs (dict).
- spring is a switch to use the spring method to optimize the locations.
- optimal_distance is an optimal distance parameter for the spring optimizer.
- tensor_color is a string to specify the color of tensor nodes.
- tensor_size is a real number to specify the size of tensor nodes.
- tensor_text_color is a color string to specify the tensor text color.
- annotate_tensors is a boolean switch for annotating different tensors by integers.
- label_size is a real number to specify the label text node size.
- label_color is a color string to specify the label text color.
- open_label_color is a color string to specify the open label text color.
- annotate_labels is a boolean switch for annotating different labels.
- format is the output format, which can be :svg, :png or :pdf.
- filename is a string as the output filename.
- fontsize::Float64 = 12.0, the font size
- fontface::String = "", the font face, leave empty to follow system
- vertex_text_color = "black", the default text color
- vertex_stroke_color = "black", the default stroke color for vertices
- vertex_color = "transparent", the default fill color for vertices
- vertex_size::Float64 = 10.0, the default vertex size
- vertex_shape::Symbol = :circle, the default vertex shape, which can be :circle, :box or :dot
- vertex_line_width::Float64 = 1, the default vertex stroke line width
- vertex_line_style::String = "solid", the line style of vertex strokes, which can be one of ["solid", "dotted", "dot", "dotdashed", "longdashed", "shortdashed", "dash", "dashed", "dotdotdashed", "dotdotdotdashed"]
- edge_color = "black", the default edge color
- edge_line_width::Float64 = 1, the default line width
- edge_style::String = "solid", the line style of edges, which can be one of ["solid", "dotted", "dot", "dotdashed", "longdashed", "shortdashed", "dash", "dashed", "dotdotdashed", "dotdotdotdashed"]
GenericTensorNetworks.show_landscape — Function

show_landscape(is_neighbor, configurations::TruncatedPoly;
layer_distance=200,
config=GraphDisplayConfig(; edge_color="gray", vertex_stroke_color="transparent", vertex_size=5),
layout_method=:spring,
optimal_distance=30.0,
colors=fill("green", K),
kwargs...)
Show the energy landscape of configurations.
Arguments
- is_neighbor: a function to determine if two configurations are neighbors.
- configurations: a TruncatedPoly object, which is the default output of the solve function with a ConfigsMax property as the argument.
Keyword arguments
- layer_distance: the distance between layers.
- config: a LuxorGraphPlot.GraphDisplayConfig object.
- layout_method: the layout method, one of :spring, :stress or :spectral.
- optimal_distance: the optimal distance for the layout.
- colors: a vector of colors, one for each layer.
- kwargs...: other keyword arguments passed to show_graph.
LuxorGraphPlot.GraphDisplayConfig — Type

GraphDisplayConfig
The configuration for graph display.
Keyword arguments
- locs is a vector of tuples for specifying the vertex locations.
- edges is a vector of tuples for specifying the edges.
- fontsize::Float64 = 12.0, the font size
- fontface::String = "", the font face, leave empty to follow system
- vertex_text_color = "black", the default text color
- vertex_stroke_color = "black", the default stroke color for vertices
- vertex_color = "transparent", the default fill color for vertices
- vertex_size::Float64 = 10.0, the default vertex size
- vertex_shape::Symbol = :circle, the default vertex shape, which can be :circle, :box or :dot
- vertex_line_width::Float64 = 1, the default vertex stroke line width
- vertex_line_style::String = "solid", the line style of vertex strokes, which can be one of ["solid", "dotted", "dot", "dotdashed", "longdashed", "shortdashed", "dash", "dashed", "dotdotdashed", "dotdotdotdashed"]
- edge_color = "black", the default edge color
- edge_line_width::Float64 = 1, the default line width
- edge_style::String = "solid", the line style of edges, which can be one of ["solid", "dotted", "dot", "dotdashed", "longdashed", "shortdashed", "dash", "dashed", "dotdotdashed", "dotdotdotdashed"]
LuxorGraphPlot.Layouts.AbstractLayout — Type

AbstractLayout
Abstract type for layout algorithms.
LuxorGraphPlot.Layouts.SpringLayout — Type

SpringLayout <: AbstractLayout

A layout algorithm based on a spring model.

Fields
- optimal_distance::Float64: the optimal distance between vertices
- maxiter::Int: the maximum number of iterations
- α0::Float64: the initial moving speed
- meta::Dict{Symbol, Any}: graph-dependent meta information, including
  - initial_locs: initial vertex locations
  - mask: boolean mask for which vertices to relocate
LuxorGraphPlot.Layouts.StressLayout — Type

StressLayout <: AbstractLayout

A layout algorithm based on stress majorization.

Fields
- optimal_distance::Float64: the optimal distance between vertices
- maxiter::Int: the maximum number of iterations
- rtol::Float64: the absolute tolerance
- initial_locs: initial vertex locations
- mask: boolean mask for which vertices to relocate
- meta::Dict{Symbol, Any}: graph-dependent meta information, including
  - initial_locs: initial vertex locations
  - mask: boolean mask for which vertices to relocate
LuxorGraphPlot.Layouts.SpectralLayout — Type

SpectralLayout <: AbstractLayout

A layout algorithm based on spectral graph theory.

Fields
- optimal_distance::Float64: the optimal distance between vertices
- dimension::Int: the number of dimensions
LuxorGraphPlot.Layouts.Layered — Type

Layered <: AbstractLayout

Layered version of a parent layout algorithm.

Fields
- parent::LT: the parent layout algorithm
- zlocs::Vector{T}: the z-axis locations
- aspect_ratio::Float64: the aspect ratio of the z-axis
LuxorGraphPlot.Layouts.LayeredSpringLayout — Function

LayeredSpringLayout(; zlocs, optimal_distance, aspect_ration=0.2)

Create a layered spring layout.

Keyword Arguments
- zlocs: the z-axis locations
- optimal_distance::Float64: the optimal distance between vertices
- aspect_ration::Float64: the aspect ratio of the z-axis
- α0::Float64: the initial moving speed
- maxiter::Int: the maximum number of iterations
LuxorGraphPlot.Layouts.LayeredStressLayout — Function

LayeredStressLayout(; zlocs, optimal_distance, aspect_ration=0.2)

Create a layered stress layout.

Keyword Arguments
- zlocs: the z-axis locations
- optimal_distance::Float64: the optimal distance between vertices
- aspect_ration::Float64: the aspect ratio of the z-axis
- maxiter::Int: the maximum number of iterations
- rtol::Float64: the absolute tolerance
LuxorGraphPlot.Layouts.render_locs — Function

render_locs(graph, layout::Layout)

Render the vertex locations for a graph from an AbstractLayout instance.

Arguments
- graph::AbstractGraph: the graph to render
- layout::AbstractLayout: the layout algorithm
GenericTensorNetworks.diagonal_coupled_graph — Function

diagonal_coupled_graph(mask::AbstractMatrix{Bool})

Create a masked diagonal-coupled square lattice graph from a specified mask.
GenericTensorNetworks.square_lattice_graph — Function

square_lattice_graph(mask::AbstractMatrix{Bool})
Create a masked square lattice graph.
GenericTensorNetworks.unit_disk_graph — Function

unit_disk_graph(locs::AbstractVector, unit::Real)

Create a unit disk graph with locations specified by locs and unit distance unit.
GenericTensorNetworks.line_graph — Function

line_graph(g::SimpleGraph)

Returns the line graph of g. The line graph is generated by mapping each edge to a vertex; two such vertices are connected if the corresponding edges share a common vertex.
GenericTensorNetworks.random_diagonal_coupled_graph — Function

random_diagonal_coupled_graph(m::Int, n::Int, ρ::Real)

Create an $m\times n$ random masked diagonal-coupled square lattice graph, with the number of vertices equal to $\lfloor m \times n\times \rho \rceil$.
GenericTensorNetworks.random_square_lattice_graph — Function

random_square_lattice_graph(m::Int, n::Int, ρ::Real)

Create a random masked square lattice graph, with the number of vertices fixed to $\lfloor mn\rho \rceil$.
One can also use random_regular_graph and smallgraph in Graphs to build special graphs.
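A sketch of building small test graphs with these helpers (the commented edge structure follows from the unit-distance rule):

using GenericTensorNetworks, Graphs

g1 = square_lattice_graph(trues(3, 3))  # a fully occupied 3×3 lattice
g2 = unit_disk_graph([(0.0, 0.0), (0.5, 0.0), (2.0, 0.0)], 1.0)  # only vertices 1 and 2 are connected
g3 = random_square_lattice_graph(5, 5, 0.8)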
Multiprocessing
GenericTensorNetworks.SimpleMultiprocessing.multiprocess_run — Function

multiprocess_run(func, inputs::AbstractVector)

Execute the function func on inputs with multiple processes.

Example
Suppose we have a file run.jl with the following contents:

using GenericTensorNetworks.SimpleMultiprocessing
results = multiprocess_run(x->x^2, randn(8))

In a terminal, you may run the script with 4 processes by typing
$ julia -p4 run.jl
From worker 2: [ Info: running argument -0.17544008350172655 on device 2
From worker 5: [ Info: running argument 0.34578117779452555 on device 5
From worker 3: [ Info: running argument 2.0312551239727705 on device 3
From worker 4: [ Info: running argument -0.7319353419291961 on device 4
From worker 2: [ Info: running argument 0.013132180639054629 on device 2
From worker 3: [ Info: running argument 0.9960101782201602 on device 3
From worker 4: [ Info: running argument -0.5613942832743966 on device 4
From worker 5: [ Info: running argument 0.39460402723831134 on device 5