Hello,

yes, I believe I understand. What's puzzling is that I should be able to
reproduce your problem using your database. Would you mind sending me, once
more, a tarball of the /var/lib/lxd/database directory of an LXD instance
which is currently broken? Just to double check. I don't have any other
ideas at the moment.
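In case it helps, something along these lines should produce the tarball. This is just a sketch: the scratch directory below stands in for the real /var/lib/lxd so the commands are self-contained; on your machine point LXD_DIR at /var/lib/lxd instead (you'll likely need sudo for that):

```shell
# Stand-in for /var/lib/lxd so this example runs anywhere;
# replace with LXD_DIR=/var/lib/lxd on the broken machine.
LXD_DIR="$(mktemp -d)"
mkdir -p "$LXD_DIR/database/global"
touch "$LXD_DIR/database/local.db"

# Pack the whole database directory (dqlite data included) into one tarball.
tar -czf lxd-database.tar.gz -C "$LXD_DIR" database

# List the archive contents as a quick sanity check before sending.
tar -tzf lxd-database.tar.gz
```

The final `tar -tzf` only lists what went into the archive, so you can confirm the dqlite files are actually in there.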

Pierre Couderc <[email protected]> writes:

> When I start a new clean lxd instance, I can run lxd init and launch a first container.
> Then I start working, and it works; I successfully import from other lxd instances.
> At some point, some lxc command fails, such as lxc copy (local).
> Then nothing works any more: every lxc command gets a socket error.
> If I reboot, "lxc ls" gives the same error and the messages that I have sent.
> I hope I am clear...
>
>    On Thursday, 30 August 2018 at 14:07:19 UTC+2, Free Ekanayaka 
> <[email protected]> wrote:  
>  
>  I have a few questions:
>
> 1) Does the failure happen when you start with a fresh lxd instance?
>
> 2) If the answer to 1) is "no", is there a repeatable process that
>   takes you from a fresh lxd instance to the point where it
>   crashes with the failure you pasted?
>
> 3) Regardless of the answers to 1) and 2), does the failure happen
>   consistently? I.e., does it happen every time you run "lxc ls"?
>
> Free
>
> Pierre Couderc <[email protected]> writes:
>
>> I am on the latest releases from git. For dqlite, the last log entry is:
>>
>> commit f160665d9e50e39d156591546732a2e0b3712f73
>> Author: Free Ekanayaka <[email protected]>
>> Date:   Mon Aug 20 19:04:10 2018 +0200
>>
>> Mmm, I can send you my tarball again, but it will be the same as the one I 
>> sent you before...
>>  It seems the problem is linked to my computer... Maybe I could enable 
>> some traces on my computer?
>>
>>
>>
>>    On Thursday, 30 August 2018 at 13:02:44 UTC+2, Free Ekanayaka 
>><[email protected]> wrote:  
>>  
>>  Hello,
>>
>> this seems to be the same failure you reported earlier (thread with the
>> subject "lxd refuses to start ...").
>>
>> When you sent me the database tarball last time, I didn't see any issue
>> and I could not reproduce the failure. Can you please double check that
>> your version of the dqlite C library is up to date (tag v0.2.2) and that
>> the go-dqlite git clone under GOPATH actually points to the master version
>> on github? Just run "git status" under 
>> $GOPATH/src/github.com/CanonicalLtd/go-dqlite
>> and compare it with github.
>>
>> If all your dependencies turn out to be up-to-date, you may want to
>> again send me a tarball of your /var/lib/lxd/database directory, and
>> I'll double check too.
>>
>> Free
>>
>> Pierre Couderc <[email protected]> writes:
>>
>>> Currently I have many instabilities with lxd.
>>> When I try to start it, I get:
>>> nous@couderc:~$ export GOPATH=~/go
>>> nous@couderc:~$ sudo -E -s
>>> root@couderc:~# echo $LD_LIBRARY_PATH
>>> /home/nous/go/deps/sqlite/.libs/:/home/nous/go/deps/dqlite/.libs/
>>> root@couderc:~# cd go/bin
>>> root@couderc:~/go/bin# ls
>>> deps  fuidshift  lxc  lxc-to-lxd  lxd  lxd-benchmark  lxd-p2c  macaroon-identity
>>> root@couderc:~/go/bin# nohup lxd --group sudo &
>>> [1] 1202
>>> root@couderc:~/go/bin# nohup: les entrées sont ignorées et la sortie est ajoutée à 'nohup.out'
>>> ls
>>> deps  fuidshift  lxc  lxc-to-lxd  lxd  lxd-benchmark  lxd-p2c  macaroon-identity  nohup.out
>>> [1]+  Termine 2               nohup lxd --group sudo
>>> root@couderc:~/go/bin# lxc ls
>>> Error: Get http://unix.socket/1.0: dial unix /var/lib/lxd/unix.socket: connect: connection refused
>>> root@couderc:~/go/bin# cat nohup.out
>>> lvl=warn msg="AppArmor support has been disabled because of lack of kernel support" t=2018-08-30T12:23:21+0200
>>> lvl=warn msg="CGroup memory swap accounting is disabled, swap limits will be ignored." t=2018-08-30T12:23:21+0200
>>> panic: unknown data type
>>>
>>> goroutine 1 [running]:
>>> github.com/CanonicalLtd/go-dqlite/internal/client.(*Rows).Next(0xc42000d660, 0xc4203ec6c0, 0x3, 0x3, 0xc420044070, 0xc4204b8bd0)
>>>         /home/nous/go/src/github.com/CanonicalLtd/go-dqlite/internal/client/message.go:549 +0x914
>>> github.com/CanonicalLtd/go-dqlite.(*Rows).Next(0xc42000d660, 0xc4203ec6c0, 0x3, 0x3, 0xf24e40, 0xc4200dd268)
>>>         /home/nous/go/src/github.com/CanonicalLtd/go-dqlite/driver.go:515 +0x4b
>>> database/sql.(*Rows).nextLocked(0xc4201ecc00, 0xc420240000)
>>>         /usr/lib/go-1.10/src/database/sql/sql.go:2622 +0xc4
>>> database/sql.(*Rows).Next.func1()
>>>         /usr/lib/go-1.10/src/database/sql/sql.go:2600 +0x3c
>>> database/sql.withLock(0x11fa640, 0xc4201ecc30, 0xc4204b8c88)
>>>         /usr/lib/go-1.10/src/database/sql/sql.go:3032 +0x63
>>> database/sql.(*Rows).Next(0xc4201ecc00, 0xc4203ed080)
>>>         /usr/lib/go-1.10/src/database/sql/sql.go:2599 +0x7a
>>> github.com/lxc/lxd/lxd/db/query.SelectObjects(0xc4201eca00, 0xc4203e9c70, 0xc4203ba000, 0xc0, 0xc4203e9b70, 0x1, 0x1, 0x0, 0x0)
>>>         /home/nous/go/src/github.com/lxc/lxd/lxd/db/query/objects.go:18 +0xda
>>> github.com/lxc/lxd/lxd/db.(*ClusterTx).containerArgsList(0xc4203e9b30, 0x1201201, 0xc4200ba030, 0x0, 0xc420270101, 0xc4201eca00, 0x0)
>>>         /home/nous/go/src/github.com/lxc/lxd/lxd/db/containers.go:442 +0x5a7
>>> github.com/lxc/lxd/lxd/db.(*ClusterTx).ContainerArgsNodeList(0xc4203e9b30, 0x0, 0x0, 0xc4204b90a8, 0x771d7c, 0xc420018dc0)
>>>         /home/nous/go/src/github.com/lxc/lxd/lxd/db/containers.go:347 +0x30
>>> main.containerLoadNodeAll.func1(0xc4203e9b30, 0x0, 0x0)
>>>         /home/nous/go/src/github.com/lxc/lxd/lxd/container.go:1200 +0x38
>>> github.com/lxc/lxd/lxd/db.(*Cluster).transaction.func1.1(0xc4201eca00, 0xc4201eca00, 0x0)
>>>         /home/nous/go/src/github.com/lxc/lxd/lxd/db/db.go:309 +0x42
>>> github.com/lxc/lxd/lxd/db/query.Transaction(0xc420018dc0, 0xc4204b9130, 0x7, 0x8)
>>>         /home/nous/go/src/github.com/lxc/lxd/lxd/db/query/transaction.go:17 +0x5a
>>> github.com/lxc/lxd/lxd/db.(*Cluster).transaction.func1(0x7f3554e01000, 0x0)
>>>         /home/nous/go/src/github.com/lxc/lxd/lxd/db/db.go:307 +0x55
>>> github.com/lxc/lxd/lxd/db/query.Retry(0xc4204b91e0, 0xc4203e9b30, 0x434b69)
>>>         /home/nous/go/src/github.com/lxc/lxd/lxd/db/query/retry.go:20 +0xae
>>> github.com/lxc/lxd/lxd/db.(*Cluster).transaction(0xc42026ca50, 0xc4204b9290, 0xc42026ca60, 0xc420272b60)
>>>         /home/nous/go/src/github.com/lxc/lxd/lxd/db/db.go:306 +0x6d
>>> github.com/lxc/lxd/lxd/db.(*Cluster).Transaction(0xc42026ca50, 0xc4204b9290, 0x0, 0x0)
>>>         /home/nous/go/src/github.com/lxc/lxd/lxd/db/db.go:270 +0x80
>>> main.containerLoadNodeAll(0xc4203ec420, 0x1902720, 0x4, 0xc42003e270, 0x2b, 0x1928910)
>>>         /home/nous/go/src/github.com/lxc/lxd/lxd/container.go:1198 +0x67
>>> main.deviceInotifyDirRescan(0xc4203ec420)
>>>         /home/nous/go/src/github.com/lxc/lxd/lxd/devices.go:1844 +0x43
>>> main.(*Daemon).init(0xc4202c2750, 0xc4202a78f0, 0x40e446)
>>>         /home/nous/go/src/github.com/lxc/lxd/lxd/daemon.go:628 +0x13c4
>>> main.(*Daemon).Init(0xc4202c2750, 0xc4202c2750, 0xc420092a80)
>>>         /home/nous/go/src/github.com/lxc/lxd/lxd/daemon.go:363 +0x2f
>>> main.(*cmdDaemon).Run(0xc420272980, 0xc4202b8500, 0xc420272a80, 0x0, 0x2, 0x0, 0x0)
>>>         /home/nous/go/src/github.com/lxc/lxd/lxd/main_daemon.go:61 +0x266
>>> main.(*cmdDaemon).Run-fm(0xc4202b8500, 0xc420272a80, 0x0, 0x2, 0x0, 0x0)
>>>         /home/nous/go/src/github.com/lxc/lxd/lxd/main_daemon.go:36 +0x52
>>> github.com/spf13/cobra.(*Command).execute(0xc4202b8500, 0xc4200a4160, 0x2, 0x2, 0xc4202b8500, 0xc4200a4160)
>>>         /home/nous/go/src/github.com/spf13/cobra/command.go:762 +0x468
>>> github.com/spf13/cobra.(*Command).ExecuteC(0xc4202b8500, 0x0, 0xc4202c0c80, 0xc4202c0c80)
>>>         /home/nous/go/src/github.com/spf13/cobra/command.go:852 +0x30a
>>> github.com/spf13/cobra.(*Command).Execute(0xc4202b8500, 0xc4202a7e00, 0x1)
>>>         /home/nous/go/src/github.com/spf13/cobra/command.go:800 +0x2b
>>> main.main()
>>>         /home/nous/go/src/github.com/lxc/lxd/lxd/main.go:164 +0xea3
>>> root@couderc:~/go/bin# ^C
>>> root@couderc:~/go/bin#
>>>
>>> The only way I have found is to reinitialize and import every container, and 
>>> even that only works when it wants to....
>>> Thank you for any help
>>> PC
>>> _______________________________________________
>>> lxc-users mailing list
>>> [email protected]
>>> http://lists.linuxcontainers.org/listinfo/lxc-users    