# support-questions
c
Hey guys! I'm facing a problem with keeping the SuperTokens process up. I have a remote EC2 instance on AWS; I run `sudo supertokens start`, and it works for a while. After some time, the process stops. I tried to create a service:
```
[Unit]
Description=Supertokens
After=network.target

[Service]
Type=simple
User=root
ExecStart=/usr/bin/bash -c "sudo /usr/bin/supertokens start"
ExecStop=/usr/bin/bash -c "sudo /usr/bin/supertokens stop"

[Install]
WantedBy=multi-user.target
```
But the situation is worse: the service stops immediately, and if I add the `Restart=always` option, it basically restarts SuperTokens endlessly *(consuming tons of CPU)*. I'm not using Docker!
r
Hey!
What’s the output with `--foreground`?
Does it start successfully at first?
There are error logs too
Can I see the logs?
c
Yes, with `--foreground` the service does actually start and work. I don't know how long it lasts (it usually stops working the next day), so I'll let you know tomorrow if something happens. (I used this guide to create the service: https://gist.github.com/CuriousCI/8106c780a4a95f835cba2ead599839df)
Now that I think about it, it would be cool for the non-Docker supertokens-core to include a service file... Someone would just need to run `sudo service supertokens start` or `sudo service supertokens restart`, and it would work like any other service (Nginx, PostgreSQL, Docker, etc.).
About the logs, I'll have to check tomorrow...
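For reference, a unit along these lines might behave better than the one above. It's only a sketch: it assumes `supertokens start --foreground` keeps the process attached, so systemd supervises the core itself instead of a wrapper that exits right away:
```
[Unit]
Description=SuperTokens core
After=network.target

[Service]
ExecStart=/usr/bin/supertokens start --foreground
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```
With `Restart=on-failure` plus a `RestartSec` delay it also shouldn't spin the CPU the way the bare `Restart=always` loop did.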
I think it's a memory problem: checking `dmesg` I get some `Out of memory: kill process {PID}` messages, but I haven't checked whether the PID corresponds to the one SuperTokens gets... If it happens again, I'll let you know (I'm trying to use less memory with `--with-space`).
r
hmm. We haven't seen any such issues when running the core ourselves. And we have been running quite a few without restarting them for > 1 year even.
c
It might also be a problem with the EC2 AWS instance I'm using, which is very limited in memory and CPU
r
hmm. I have run the core using t2.micro as well - for a very long time
works fine on it too
c
It might be because I'm also running the DB and the Python backend... It happened again just now, and I'm checking the errors
r
hmm okay.
c
Ok, the Java process is the problem... the PID of the killed process is the same as the one SuperTokens got, so yes, that might be one of the problems... the other one is yum, but that's not related to SuperTokens...
```
$ dmesg
...
[12835829.721863] Out of memory: Kill process 11750 (java) score 210 or sacrifice child
[12835829.728337] Killed process 11750 (java) total-vm:2122848kB, anon-rss:105756kB, file-rss:0kB, shmem-rss:0kB
...
[12835832.563089] Out of memory: Kill process 4917 (yum) score 439 or sacrifice child
[12835832.568976] Killed process 4917 (yum) total-vm:536324kB, anon-rss:221104kB, file-rss:0kB, shmem-rss:0kB
```
```
$ cd /usr/lib/supertokens/.start
$ cat 0.0.0.0-3567
11750
```
And I used `sudo supertokens start --with-space=50` to start the process...
r
Hmm I see. 50 MB might be a little too little
I'm checking how much RAM it takes on high load
c
The logs give some information... something happened this morning at 3 AM... let me get you the files
r
it takes a minimum of 180 MB and, on normal load, can go up to 400 MB of RAM. So I think you might want to give it 300 MB just to be safe
(high load being ~50k monthly active users for one core)
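So, assuming `--with-space` takes a value in MB like the `--with-space=50` earlier, restarting with something like this should give it enough headroom:
```
sudo supertokens stop
sudo supertokens start --with-space=300
```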
c
Something strange happened at 3 AM today according to the logs
In case it might be useful
r
error.log is empty?
c
I think the problem isn't the process not having enough memory, but the system killing the process because it consumes too much memory
I just checked, yes
I cleaned it a few days ago before testing the problem, to check if anything gets written to the error.log file, but nothing shows up there
r
hmm
c
I'll try adding some swap memory
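On an instance this small, something like the following should do it; it's a sketch that assumes a 1 GB swap file is enough and that the root filesystem has the space for it:
```
# create and enable a 1 GB swap file
sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# keep it across reboots
echo '/swapfile swap swap defaults 0 0' | sudo tee -a /etc/fstab
```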
r
let me also check the core's behaviour with a 50 MB limit
c
The strange thing is that the process behaves normally for 2 days, then it decides to explode
r
that is strange indeed. Is there a spike in requests? Which version of the core are you using?
c
The metrics don't show any spike in requests, I'm using core 3.12.1
r
hmm. that should not have any memory leak issue. Are you using argon2 hashing by any chance?
c
Yeah, the system already uses 282 MB by default without SuperTokens running, so it's a problem with the machine's memory
I use default values for hashing
r
so no argon2?
c
No argon2
r
I see.
c
I'll check if there are any problems with swap memory enabled, and I'll let you know
r
ok thanks! i'll test on our side as well.
c
Ah, another thing: when the system stops the process, I have to manually delete the file in the `.start` folder, or the SuperTokens CLI will give some problems.
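Concretely, the cleanup is just removing the PID file shown earlier, e.g. (using the host-port filename from the `cat` above):
```
sudo rm /usr/lib/supertokens/.start/0.0.0.0-3567
```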
r
ah yeah, that's expected
Which operating system are you using?
c
Amazon Linux 2
I don't have problems with SuperTokens crashing anymore. It was an insufficient memory problem.
r
Ok great!