
release(v20.11): merge master to bring in latest changes #7076

Merged: 39 commits, Dec 7, 2020. Changes shown from 37 of the 39 commits.
30ee966
fix(enterprise): Set version correctly post marshalling during restor…
rahulgurnani Dec 2, 2020
7c077e5
chore(test): Refactor GraphQL tests to use best practices (GRAPHQL-88…
abhimanyusinghgaur Dec 2, 2020
495dde5
add systest/backup/filesystem/data/backups dir used for tests (#7047)
NamanJain8 Dec 2, 2020
04bfd54
docs (chore): Minor adjustments to Ludicrous mode (#7041)
bucanero Dec 2, 2020
9a68461
bring in increased buffers for allocator from ristretto (#7046)
NamanJain8 Dec 2, 2020
fe35666
fix(badger): Update Badger to fix OOMs. (#7050)
danielmai Dec 3, 2020
9013d22
Upgrade Bleve to the latest (#7040)
chewxy Dec 3, 2020
e1e1e47
removed note about order for term and trigram since it has been fixed…
OmarAyo Dec 3, 2020
0fc947b
Docs (clarification) Note proper use and meaning of --replicas flag (…
aaroncarey Dec 3, 2020
8456ca4
docs (chore): adjust search-filtering.md format (#7056)
bucanero Dec 3, 2020
c0ce986
Fix typo in add.md in wiki. (#6723)
YukiKAJIHARA Dec 3, 2020
1fd6239
Update Dgraph authorization format in tutorial (#6442) (#7057)
bucanero Dec 3, 2020
cadf448
feat(metric): Add dgraph_memory_alloc_bytes to track jemalloc memory.…
danielmai Dec 3, 2020
bc41f51
[docs] : rephrase + added example to query logging section (#7032)
OmarAyo Dec 3, 2020
c33cfe8
Use stream framework in debug tool (#7060)
Dec 4, 2020
206e45f
feat(debug): Expose http pprof to debug tool. (#6032)
danielmai Dec 4, 2020
37fc45f
build: Ignore netlify build if changes not done in wiki (#7061)
mbj36 Dec 4, 2020
eb6f932
Remove path since base is set (#7063)
mbj36 Dec 4, 2020
c635ee6
Speed up debug tool for printing out keys
manishrjain Dec 4, 2020
130956e
perf(badger): Update Badger to fix optimization regression. (#7058)
danielmai Dec 4, 2020
982bb15
fix(metrics): Use LastCount for RaftAppliedIndex and MaxAssignedTs. (…
danielmai Dec 4, 2020
b0b8787
fix(debug): Set cache for encrypted directories. (#7067)
danielmai Dec 4, 2020
95e078a
feat(debug): Support rollup in debug tool (#7066)
Dec 4, 2020
38718c3
Debug: Allow a way to pass in hex prefix
manishrjain Dec 4, 2020
47599be
Fix(OOM): Use z.Allocator again (#7068)
manishrjain Dec 4, 2020
22e8d6d
Update go.mod to bring in latest Badger
manishrjain Dec 4, 2020
9d82788
Bring in jemalloc debugging.
manishrjain Dec 5, 2020
e923289
Fix go.mod
manishrjain Dec 5, 2020
cf64622
feat(metrics): Add metrics for size of applyCh and proposal size. (#7…
danielmai Dec 5, 2020
cd1f186
feat(zero): Bring in jemalloc debugging to Zero. (#7070)
danielmai Dec 5, 2020
6de7434
chore(compose): Add "make install" target.
danielmai Dec 5, 2020
cb23735
test: Fix health test. (#7071)
danielmai Dec 5, 2020
de8b762
Opt(query): Respect the first N argument (#7072)
manishrjain Dec 5, 2020
6347f29
fix: Check for nil ServerCloser in shutdown handler (#7048)
ajeetdsouza Dec 7, 2020
a6f3504
fix(GraphQL): This PR allows repetition of fields inside implementing…
JatinDev543 Dec 7, 2020
294ff8a
update badger to bring in minor fixes (#7075)
NamanJain8 Dec 7, 2020
c301315
Merge branch 'master' into naman/merge-master-rc3
NamanJain8 Dec 7, 2020
fc19ba4
fix(Query): Fix incorrect RDF response. (#7017)
minhaj-shakeel Dec 7, 2020
0d85513
fix(raftwal): Pass the encryption key instead of reading from WorkerC…
ahsanbarkati Dec 7, 2020
5 changes: 4 additions & 1 deletion compose/Makefile
@@ -20,9 +20,12 @@ BUILD_FLAGS ?= "-N -l"
.PHONY: all
all: install_dgraph $(BIN)

.PHONY: install
install: all

.PHONY: install_dgraph
install_dgraph:
$(MAKE) -C .. install
$(MAKE) -C ../dgraph install

$(BIN): compose.go
go build -gcflags=$(BUILD_FLAGS) -o $(BIN)
1 change: 1 addition & 0 deletions dgraph/cmd/alpha/metrics_test.go
@@ -96,6 +96,7 @@ func TestMetrics(t *testing.T) {

// Dgraph Memory Metrics
"dgraph_memory_idle_bytes", "dgraph_memory_inuse_bytes", "dgraph_memory_proc_bytes",
"dgraph_memory_alloc_bytes",
// Dgraph Activity Metrics
"dgraph_active_mutations_total", "dgraph_pending_proposals_total",
"dgraph_pending_queries_total",
15 changes: 12 additions & 3 deletions dgraph/cmd/alpha/run.go
@@ -425,6 +425,7 @@ func setupServer(closer *z.Closer) {
http.HandleFunc("/alter", alterHandler)
http.HandleFunc("/health", healthCheck)
http.HandleFunc("/state", stateHandler)
http.HandleFunc("/jemalloc", x.JemallocHandler)

// TODO: Figure out what this is for?
http.HandleFunc("/debug/store", storeStatsHandler)
@@ -434,6 +435,7 @@ func setupServer(closer *z.Closer) {
// Global Epoch is a lockless synchronization mechanism for the graphql service.
// It is just an atomic counter used by the graphql subscription to update its state.
// It is used to detect schema changes and server exit.
// It is also reported by /probe/graphql endpoint as the schemaUpdateCounter.

// Implementation for schema change:
// The global epoch is incremented when there is a schema change.
@@ -458,7 +460,8 @@ func setupServer(closer *z.Closer) {
}
w.WriteHeader(httpStatusCode)
w.Header().Set("Content-Type", "application/json")
x.Check2(w.Write([]byte(fmt.Sprintf(`{"status":"%s"}`, healthStatus.StatusMsg))))
x.Check2(w.Write([]byte(fmt.Sprintf(`{"status":"%s","schemaUpdateCounter":%d}`,
healthStatus.StatusMsg, atomic.LoadUint64(&globalEpoch)))))
})
http.Handle("/admin", allowedMethodsHandler(allowedMethods{
http.MethodGet: true,
@@ -726,10 +729,16 @@ func run() {
go func() {
var numShutDownSig int
for range sdCh {
closer := admin.ServerCloser
if closer == nil {
glog.Infoln("Caught Ctrl-C. Terminating now.")
os.Exit(1)
}

select {
case <-admin.ServerCloser.HasBeenClosed():
case <-closer.HasBeenClosed():
default:
admin.ServerCloser.Signal()
closer.Signal()
}
numShutDownSig++
glog.Infoln("Caught Ctrl-C. Terminating now (this may take a few seconds)...")
131 changes: 112 additions & 19 deletions dgraph/cmd/debug/run.go
@@ -17,17 +17,26 @@
package debug

import (
"bufio"
"bytes"
"context"
"encoding/hex"
"fmt"
"io"
"log"
"math"
"net/http"
_ "net/http/pprof"
"os"
"sort"
"strconv"
"strings"
"sync/atomic"

"github.com/dgraph-io/badger/v2"
bpb "github.com/dgraph-io/badger/v2/pb"
"github.com/dgraph-io/ristretto/z"

"github.com/dgraph-io/dgraph/codec"
"github.com/dgraph-io/dgraph/ee/enc"
"github.com/dgraph-io/dgraph/posting"
@@ -47,8 +56,10 @@ var (
type flagOptions struct {
vals bool
keyLookup string
rollupKey string
keyHistory bool
predicate string
prefix string
readOnly bool
pdir string
itemMeta bool
@@ -83,7 +94,9 @@ func init() {
flag.Uint64Var(&opt.readTs, "at", math.MaxUint64, "Set read timestamp for all txns.")
flag.BoolVarP(&opt.readOnly, "readonly", "o", true, "Open in read only mode.")
flag.StringVarP(&opt.predicate, "pred", "r", "", "Only output specified predicate.")
flag.StringVarP(&opt.prefix, "prefix", "", "", "Uses a hex prefix.")
flag.StringVarP(&opt.keyLookup, "lookup", "l", "", "Hex of key to lookup.")
flag.StringVar(&opt.rollupKey, "rollup", "", "Hex of key to rollup.")
flag.BoolVarP(&opt.keyHistory, "history", "y", false, "Show all versions of a key.")
flag.StringVarP(&opt.pdir, "postings", "p", "", "Directory where posting lists are stored.")
flag.BoolVar(&opt.sizeHistogram, "histogram", false,
@@ -432,6 +445,43 @@ func appendPosting(w io.Writer, o *pb.Posting) {
}
fmt.Fprintln(w, "")
}
func rollupKey(db *badger.DB) {
txn := db.NewTransactionAt(opt.readTs, false)
defer txn.Discard()

key, err := hex.DecodeString(opt.rollupKey)
x.Check(err)

iopts := badger.DefaultIteratorOptions
iopts.AllVersions = true
iopts.PrefetchValues = false
itr := txn.NewKeyIterator(key, iopts)
defer itr.Close()

itr.Rewind()
if !itr.Valid() {
log.Fatalf("Unable to seek to key: %s", hex.Dump(key))
}

item := itr.Item()
// Don't need to do anything if the bitdelta is not set.
if item.UserMeta()&posting.BitDeltaPosting == 0 {
fmt.Printf("First item has UserMeta:[b%04b]. Nothing to do\n", item.UserMeta())
return
}
pl, err := posting.ReadPostingList(item.KeyCopy(nil), itr)
x.Check(err)

alloc := z.NewAllocator(32 << 20)
defer alloc.Release()

kvs, err := pl.Rollup(alloc)
x.Check(err)

wb := db.NewManagedWriteBatch()
x.Check(wb.WriteList(&bpb.KVList{Kv: kvs}))
x.Check(wb.Flush())
}

func lookup(db *badger.DB) {
txn := db.NewTransactionAt(opt.readTs, false)
@@ -487,26 +537,20 @@ func lookup(db *badger.DB) {
// {i} attr: name term: [8] woods ts: 535 item: [28, b0100] sz: 81 dcnt: 3 key: 00000...6f6f6473
// Fix the TestBulkLoadMultiShard accordingly, if the format changes.
func printKeys(db *badger.DB) {
txn := db.NewTransactionAt(opt.readTs, false)
defer txn.Discard()

iopts := badger.DefaultIteratorOptions
iopts.AllVersions = true
iopts.PrefetchValues = false
itr := txn.NewIterator(iopts)
defer itr.Close()

var prefix []byte
if len(opt.predicate) > 0 {
prefix = x.PredicatePrefix(opt.predicate)
} else if len(opt.prefix) > 0 {
p, err := hex.DecodeString(opt.prefix)
x.Check(err)
prefix = p
}

fmt.Printf("prefix = %s\n", hex.Dump(prefix))
var loop int
for itr.Seek(prefix); itr.ValidForPrefix(prefix); {
stream := db.NewStreamAt(opt.readTs)
stream.Prefix = prefix
var total uint64
stream.KeyToList = func(key []byte, itr *badger.Iterator) (*bpb.KVList, error) {
item := itr.Item()
var key []byte
key = item.KeyCopy(key)
pk, err := x.Parse(key)
x.Check(err)
var buf bytes.Buffer
@@ -544,6 +588,7 @@ func printKeys(db *badger.DB) {
}

var sz, deltaCount int64
LOOP:
for ; itr.ValidForPrefix(prefix); itr.Next() {
item := itr.Item()
if !bytes.Equal(item.Key(), key) {
@@ -557,7 +602,7 @@ func printKeys(db *badger.DB) {
// This is really the default case, as one of the 4 bits must be set.
case posting.BitCompletePosting, posting.BitEmptyPosting, posting.BitSchemaPosting:
sz += item.EstimatedSize()
break
break LOOP
case posting.BitDeltaPosting:
sz += item.EstimatedSize()
deltaCount++
@@ -569,20 +614,54 @@ func printKeys(db *badger.DB) {
break
}
}
var invalidSz, invalidCount uint64
// Skip over all remaining versions of the key.
for ; itr.ValidForPrefix(prefix); itr.Next() {
item := itr.Item()
if !bytes.Equal(item.Key(), key) {
break
}
invalidSz += uint64(item.EstimatedSize())
invalidCount++
}

fmt.Fprintf(&buf, " sz: %d dcnt: %d", sz, deltaCount)
if invalidCount > 0 {
fmt.Fprintf(&buf, " isz: %d icount: %d", invalidSz, invalidCount)
}
fmt.Fprintf(&buf, " key: %s", hex.EncodeToString(key))
fmt.Println(buf.String())
loop++
// If total size is more than 1 GB, or there are more than 10 million versions, flag this key.
if uint64(sz)+invalidSz > (1<<30) || uint64(deltaCount)+invalidCount > 10e6 {
fmt.Fprintf(&buf, " [HEAVY]")
}
buf.WriteRune('\n')
list := &bpb.KVList{}
list.Kv = append(list.Kv, &bpb.KV{
Value: buf.Bytes(),
})
// Don't call fmt.Println here. It is much slower.
return list, nil
}
fmt.Printf("Found %d keys\n", loop)

w := bufio.NewWriterSize(os.Stdout, 16<<20)
stream.Send = func(buf *z.Buffer) error {
var count int
err := buf.SliceIterate(func(s []byte) error {
var kv bpb.KV
if err := kv.Unmarshal(s); err != nil {
return err
}
x.Check2(w.Write(kv.Value))
count++
return nil
})
atomic.AddUint64(&total, uint64(count))
return err
}
x.Check(stream.Orchestrate(context.Background()))
w.Flush()
fmt.Println()
fmt.Printf("Found %d keys\n", atomic.LoadUint64(&total))
}

// Creates bounds for a histogram. The bounds are powers of two of the form
@@ -792,6 +871,16 @@ func printZeroProposal(buf *bytes.Buffer, zpr *pb.ZeroProposal) {
}

func run() {
go func() {
for i := 8080; i < 9080; i++ {
fmt.Printf("Listening for /debug HTTP requests at port: %d\n", i)
if err := http.ListenAndServe(fmt.Sprintf("localhost:%d", i), nil); err != nil {
fmt.Println("Port busy. Trying another one...")
continue
}
}
}()

var err error
dir := opt.pdir
isWal := false
@@ -806,7 +895,9 @@

bopts := badger.DefaultOptions(dir).
WithReadOnly(opt.readOnly).
WithEncryptionKey(opt.key)
WithEncryptionKey(opt.key).
WithBlockCacheSize(1 << 30).
WithIndexCacheSize(1 << 30)

x.AssertTruef(len(bopts.Dir) > 0, "No posting or wal dir specified.")
fmt.Printf("Opening DB: %s\n", bopts.Dir)
@@ -853,6 +944,8 @@ func run() {
// fmt.Printf("Min commit: %d. Max commit: %d, w.r.t %d\n", min, max, opt.readTs)

switch {
case len(opt.rollupKey) > 0:
rollupKey(db)
case len(opt.keyLookup) > 0:
lookup(db)
case len(opt.jepsen) > 0:
1 change: 1 addition & 0 deletions dgraph/cmd/zero/run.go
@@ -247,6 +247,7 @@ func run() {
http.HandleFunc("/moveTablet", st.moveTablet)
http.HandleFunc("/assign", st.assign)
http.HandleFunc("/enterpriseLicense", st.applyEnterpriseLicense)
http.HandleFunc("/jemalloc", x.JemallocHandler)
zpages.Handle(http.DefaultServeMux, "/z")

// This must be here. It does not work if placed before Grpc init.
4 changes: 3 additions & 1 deletion dgraph/main.go
@@ -101,9 +101,11 @@ func main() {
// Run the program.
cmd.Execute()
ticker.Stop()

glog.V(2).Infof("Num Allocated Bytes at program end: %d", z.NumAllocBytes())
if z.NumAllocBytes() > 0 {
glog.Warningf("MEMORY LEAK detected of size: %s\n",
humanize.Bytes(uint64(z.NumAllocBytes())))
z.PrintLeaks()
glog.Warningf("%s", z.Leaks())
}
}
22 changes: 12 additions & 10 deletions edgraph/server.go
@@ -937,18 +937,20 @@ func (s *Server) Health(ctx context.Context, all bool) (*api.Response, error) {
healthAll = append(healthAll, p.HealthInfo())
}
}

// Append self.
healthAll = append(healthAll, pb.HealthInfo{
Instance: "alpha",
Address: x.WorkerConfig.MyAddr,
Status: "healthy",
Group: strconv.Itoa(int(worker.GroupId())),
Version: x.Version(),
Uptime: int64(time.Since(x.WorkerConfig.StartTime) / time.Second),
LastEcho: time.Now().Unix(),
Ongoing: worker.GetOngoingTasks(),
Indexing: schema.GetIndexingPredicates(),
EeFeatures: ee.GetEEFeaturesList(),
Instance: "alpha",
Address: x.WorkerConfig.MyAddr,
Status: "healthy",
Group: strconv.Itoa(int(worker.GroupId())),
Version: x.Version(),
Uptime: int64(time.Since(x.WorkerConfig.StartTime) / time.Second),
LastEcho: time.Now().Unix(),
Ongoing: worker.GetOngoingTasks(),
Indexing: schema.GetIndexingPredicates(),
EeFeatures: ee.GetEEFeaturesList(),
MaxAssigned: posting.Oracle().MaxAssigned(),
})

var err error