
Moss panic with runaway disk usage #61

@connorgorman

Description

Hey everyone, I'm seeing the following panic, which looks like the process ran out of memory. However, I've attached a Grafana graph showing the disk usage of the Moss index (the metric comes from a file walker, so it could have a race condition if Moss is generating lots of files while it is walking; a sketch of that kind of walker is included after the graph below).

fatal error: runtime: cannot allocate memory

goroutine 103 [running]:
runtime.systemstack_switch()
	stdlib%/src/runtime/asm_amd64.s:311 fp=0xc000b75970 sp=0xc000b75968 pc=0x459c90
runtime.persistentalloc(0xd0, 0x0, 0x27ad2b0, 0x7c4eac)
	GOROOT/src/runtime/malloc.go:1142 +0x82 fp=0xc000b759b8 sp=0xc000b75970 pc=0x40c932
runtime.newBucket(0x1, 0x4, 0x425f76)
	GOROOT/src/runtime/mprof.go:173 +0x5e fp=0xc000b759f0 sp=0xc000b759b8 pc=0x42573e
runtime.stkbucket(0x1, 0x33a000, 0xc000b75a98, 0x4, 0x20, 0xc000b75a01, 0x7f08c8658138)
	GOROOT/src/runtime/mprof.go:240 +0x1aa fp=0xc000b75a50 sp=0xc000b759f0 pc=0x425a3a
runtime.mProf_Malloc(0xc01298a000, 0x33a000)
	GOROOT/src/runtime/mprof.go:344 +0xd6 fp=0xc000b75bc8 sp=0xc000b75a50 pc=0x425fd6
runtime.profilealloc(0xc0026e8000, 0xc01298a000, 0x33a000)
	GOROOT/src/runtime/malloc.go:1058 +0x4b fp=0xc000b75be8 sp=0xc000b75bc8 pc=0x40c6cb
runtime.mallocgc(0x33a000, 0x14f3080, 0x1, 0xc008fb0000)
	GOROOT/src/runtime/malloc.go:983 +0x46c fp=0xc000b75c88 sp=0xc000b75be8 pc=0x40bdac
runtime.makeslice(0x14f3080, 0x0, 0x338b32, 0xc008fb0000, 0x0, 0x17ec0)
	GOROOT/src/runtime/slice.go:70 +0x77 fp=0xc000b75cb8 sp=0xc000b75c88 pc=0x442c17
vendor/github.com/couchbase/moss.newSegment(...)
	vendor/github.com/couchbase/moss/segment.go:158
vendor/github.com/couchbase/moss.(*segmentStack).merge(0xc005eaf180, 0xc000b75e01, 0xc007dec910, 0xc002d71a90, 0xc00004bc90, 0x10, 0xc00004bcb)
	vendor/github.com/couchbase/moss/segment_stack_merge.go:73 +0x1bb fp=0xc000b75e48 sp=0xc000b75cb8 pc=0xcb199b
vendor/github.com/couchbase/moss.(*collection).mergerMain(0xc0004b00c0, 0xc005eaf180, 0xc007dec910, 0x1, 0xc005eaf180)
	vendor/github.com/couchbase/moss/collection_merger.go:248 +0x306 fp=0xc000b75ef0 sp=0xc000b75e48 pc=0xca6946
vendor/github.com/couchbase/moss.(*collection).runMerger(0xc0004b00c0)
	vendor/github.com/couchbase/moss/collection_merger.go:126 +0x2d0 fp=0xc000b75fd8 sp=0xc000b75ef0 pc=0xca5e30
runtime.goexit()
	stdlib%/src/runtime/asm_amd64.s:1333 +0x1 fp=0xc000b75fe0 sp=0xc000b75fd8 pc=0x45bbf1
created by vendor/github.com/couchbase/moss.(*collection).Start
	vendor/github.com/couchbase/moss/collection.go:118 +0x62
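
For context on where the allocation fails: the goroutine in the trace is the background merger that moss spawns when the collection is started. The sketch below is based on moss's documented API and is not our actual setup; Collection.Start is what creates the merger goroutine in the "created by" frame, and each ExecuteBatch pushes a segment that mergerMain later merges via newSegment, which is the call that cannot allocate here.

```go
// Minimal sketch of typical moss usage, based on the library's documented
// API rather than our actual setup. Collection.Start spawns the background
// merger goroutine (runMerger -> mergerMain) seen in the trace; batches
// executed against the collection become segments that the merger later
// merges via newSegment, where the failing allocation happens.
package main

import (
	"log"

	"github.com/couchbase/moss"
)

func main() {
	c, err := moss.NewCollection(moss.CollectionOptions{})
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	// Start creates the merger goroutine from the "created by" frame above.
	if err := c.Start(); err != nil {
		log.Fatal(err)
	}

	batch, err := c.NewBatch(0, 0)
	if err != nil {
		log.Fatal(err)
	}
	defer batch.Close()

	if err := batch.Set([]byte("car-0"), []byte("tesla")); err != nil {
		log.Fatal(err)
	}

	// The merger picks these writes up asynchronously and merges segments.
	if err := c.ExecuteBatch(batch, moss.WriteOptions{}); err != nil {
		log.Fatal(err)
	}
}
```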

[Screenshot: Grafana graph of Moss index disk usage, 2018-10-30 11:07 PM]

The disk usage grows for about six minutes and then implodes, I assume when the disk is completely full. The green line after the blip is our service restarting and our indexes being rebuilt.
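
For reference, here is a minimal sketch of the kind of disk-usage walker feeding that metric (the index directory path is hypothetical, and filepath.Walk is just one way to do it, not necessarily our exact code). Because the walker stats each entry after reading the directory listing, files that Moss creates or compacts away mid-walk can make the reported total racy, which is the caveat mentioned at the top.

```go
// Minimal sketch of a disk-usage walker like the one feeding the Grafana
// metric; the moss index directory path is hypothetical.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// dirSize sums the sizes of all regular files under root.
func dirSize(root string) (int64, error) {
	var total int64
	err := filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			// Files can disappear between the directory read and the stat
			// (e.g. moss compacting); skip them instead of failing the walk.
			if os.IsNotExist(err) {
				return nil
			}
			return err
		}
		if info.Mode().IsRegular() {
			total += info.Size()
		}
		return nil
	})
	return total, err
}

func main() {
	size, err := dirSize("/data/moss-index") // hypothetical index directory
	if err != nil {
		fmt.Fprintln(os.Stderr, "walk error:", err)
		os.Exit(1)
	}
	fmt.Printf("moss index disk usage: %d bytes\n", size)
}
```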
