belief-propagation

Using belief propagation to decode quantum codes.

Progress

  • Mar 30
    • changed the input method
    • moved the parallel threading from shell scripts into C++
  • Apr 1
    • ran the simulation and saved the results
  • Apr 4
    • plotted the results
    • plotted counts of errors with weight 0, 1, and 2 respectively. These counts are smaller than the converged-error count, because large-weight errors also contain single errors, so the counts by themselves are not meaningful. For a meaningful comparison one should plot the weight of the errors instead of the counts. This comparison has been done before, hence it is not shown here; for reference see "Weilei Research Note.pdf"
    • checked the effect of the iteration count: the convergence rate improves by about 20-30%, but there is no threshold
  • Apr 5
    • replaced BP decoding with a syndrome-based, LLR-simplified version (a sketch follows this list); it passes the test with the 7-qubit code
  • Apr 6
    • tested the repetition code
    • tested the toric code and checked the result
    • compared with itpp LDPC_Code.bp_decode(). Most results are the same, but on the toric code the error [1 1 0 0 ... 0] gets a different result while [0 ... 0 1 1 0 ... 0] gets the same one. Not sure why, and not sure whether this produces a statistical difference over a large number of tests
    • ran the full simulation on toric codes and compared. itpp performs much better; it can even decode many double errors. The algorithmic differences I observed are (a) itpp uses integers, which I think only saves time, making it faster than floating-point calculation, and (b) box-plus, which might be an optimization (see ref; a box-plus sketch follows this list)
  • Apr 7
    • itpp result: using integers gives no improvement in convergence, but the program is at least 10 times faster. With min-sum (Dint2 = 0) it both improves convergence and speeds up again (see the configuration sketch after this list)
    • wrote my own update functions: min-sum, with normalization and offset (a sketch follows this list)
    • my min-sum is slightly worse than Dint2 = 0
    • at iteration 10 the improvement percentage is similarly large, but still lower than the corresponding itpp result
    • rewrote the decoder as a class in a header file, instead of free functions
    • the layered schedule by itself shows no improvement, but layered scheduling plus enhanced feedback improves the result by a factor greater than 10 (see the layered-iteration sketch after this list). Running it overnight to see if there is a threshold; Yehua's paper has a threshold around 7%, hence I am looking at the range around 1%
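A minimal sketch of the syndrome-based check-node update mentioned under Apr 5, assuming sum-product (tanh-rule) messages. The function name and message layout are illustrative, not the repository's actual code; the one change relative to codeword-based BP is that a measured syndrome bit of 1 flips the sign of the outgoing message:

```cpp
#include <cmath>
#include <vector>

// Check-to-variable message in syndrome-based sum-product BP.
// `incoming` holds the variable-to-check LLRs from the check's other
// neighbors (the target edge is already excluded, i.e. extrinsic).
// A syndrome bit of 1 means the check sees odd parity, which flips
// the sign of the message. All names here are hypothetical.
double check_to_var(const std::vector<double>& incoming, int syndrome_bit) {
    double prod = 1.0;
    for (double llr : incoming)
        prod *= std::tanh(llr / 2.0);   // tanh product rule
    double m = 2.0 * std::atanh(prod);
    return syndrome_bit ? -m : m;       // sign flip when s_i = 1
}
```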
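The box-plus operation noted under Apr 6 is the pairwise form of the exact check-node update, boxplus(a, b) = 2 atanh(tanh(a/2) tanh(b/2)). A minimal sketch using the numerically stabler Jacobian form; this is my reading of the optimization, not itpp's actual implementation:

```cpp
#include <cmath>
#include <algorithm>

// Exact pairwise check-node combine:
//   boxplus(a, b) = sign(a) sign(b) min(|a|, |b|)
//                 + log(1 + e^{-|a+b|}) - log(1 + e^{-|a-b|})
// Dropping the two correction terms recovers plain min-sum.
double boxplus(double a, double b) {
    double sign = ((a < 0) != (b < 0)) ? -1.0 : 1.0;
    double mag  = std::min(std::fabs(a), std::fabs(b));
    double corr = std::log1p(std::exp(-std::fabs(a + b)))
                - std::log1p(std::exp(-std::fabs(a - b)));
    return sign * mag + corr;
}
```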
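For the Dint2 = 0 setting under Apr 7, my understanding is that itpp's quantized LLR arithmetic is configured through LLR_calc_unit, whose second parameter sizes the log-exp correction table; a zero-size table removes the correction term so bp_decode() effectively runs min-sum. A sketch of how this would be wired up, with the other two parameters left at values from the itpp docs (verify against your itpp version):

```cpp
#include <itpp/comm/llr.h>
#include <itpp/comm/ldpc.h>

// Dint1 = 12, Dint2 = 0, Dint3 = 7: Dint2 = 0 disables the log-exp
// correction table, reducing the box-plus update to min-sum.
itpp::LLR_calc_unit min_sum_llr(12, 0, 7);
// Assuming `ldpc` is an LDPC_Code constructed elsewhere:
// ldpc.set_llrcalc(min_sum_llr);
```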
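A minimal sketch of the normalized/offset min-sum check-node update described under Apr 7. The scale alpha and offset beta are illustrative values, not the ones used in this repository; alpha = 1, beta = 0 recovers plain min-sum:

```cpp
#include <vector>
#include <cmath>
#include <algorithm>
#include <limits>

// Min-sum check-node update with normalization (alpha) and offset (beta).
// `in` holds the variable-to-check LLRs on all edges of one check; the
// returned vector holds the extrinsic check-to-variable LLR per edge.
std::vector<double> min_sum_update(const std::vector<double>& in,
                                   double alpha = 0.75, double beta = 0.0) {
    std::vector<double> out(in.size());
    for (size_t j = 0; j < in.size(); j++) {
        double sign = 1.0;
        double mag = std::numeric_limits<double>::max();
        for (size_t k = 0; k < in.size(); k++) {
            if (k == j) continue;                 // extrinsic: skip own edge
            if (in[k] < 0) sign = -sign;
            mag = std::min(mag, std::fabs(in[k]));
        }
        out[j] = sign * std::max(0.0, alpha * mag - beta);
    }
    return out;
}
```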
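Finally, a minimal sketch of one layered (serial-schedule) min-sum iteration, assuming a dense 0/1 parity-check matrix; unlike the flooding schedule, each check row folds its update into the posteriors immediately, so later rows in the same iteration see the new values. The enhanced-feedback part is not shown, and all names are illustrative:

```cpp
#include <vector>
#include <cmath>
#include <algorithm>
#include <limits>

// One layered min-sum iteration. gamma[j] is the posterior LLR of bit j;
// R[i][j] is the stored check-to-variable message of row i (0 where H[i][j]==0).
void layered_iteration(const std::vector<std::vector<int>>& H,
                       const std::vector<int>& syndrome,
                       std::vector<double>& gamma,
                       std::vector<std::vector<double>>& R) {
    for (size_t i = 0; i < H.size(); i++) {        // one layer per check row
        std::vector<size_t> cols;
        for (size_t j = 0; j < gamma.size(); j++)
            if (H[i][j]) cols.push_back(j);
        std::vector<double> q(cols.size());
        for (size_t a = 0; a < cols.size(); a++)
            q[a] = gamma[cols[a]] - R[i][cols[a]]; // peel off the old message
        for (size_t a = 0; a < cols.size(); a++) {
            double sign = syndrome[i] ? -1.0 : 1.0; // syndrome sign flip
            double mag = std::numeric_limits<double>::max();
            for (size_t b = 0; b < cols.size(); b++) {
                if (b == a) continue;               // extrinsic: skip own edge
                if (q[b] < 0) sign = -sign;
                mag = std::min(mag, std::fabs(q[b]));
            }
            R[i][cols[a]] = sign * mag;
            gamma[cols[a]] = q[a] + R[i][cols[a]];  // posterior updated at once
        }
    }
}
```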
