IRC logs for #openttd on OFTC at 2023-08-07
00:55:26 *** k-man has quit IRC (Ping timeout: 480 seconds)
00:55:26 *** k-man_ is now known as k-man
01:24:49 *** Eddi|zuHause has quit IRC ()
01:27:21 *** Eddi|zuHause has joined #openttd
02:04:28 *** Wormnest has quit IRC (Quit: Leaving)
03:03:46 *** debdog has quit IRC (Ping timeout: 480 seconds)
04:40:53 *** Kitrana1 has joined #openttd
04:44:49 *** Kitrana2 has joined #openttd
04:45:53 *** Kitrana3 has joined #openttd
04:47:26 *** Kitrana4 has joined #openttd
04:47:31 *** Kitrana has quit IRC (Ping timeout: 480 seconds)
04:49:01 *** Kitrana1 has quit IRC (Ping timeout: 480 seconds)
04:52:04 *** Kitrana has joined #openttd
04:52:55 *** Kitrana2 has quit IRC (Ping timeout: 480 seconds)
04:54:00 *** Kitrana3 has quit IRC (Ping timeout: 480 seconds)
04:58:26 *** Kitrana4 has quit IRC (Ping timeout: 480 seconds)
07:18:25 <Eddi|zuHause> not that i know of
07:39:06 <Bouke> What would it take to support fast forward in multiplayer? When the server progresses faster than any client, it'll cause desyncs. So would having back pressure from the clients help solve this? The slowest client would then determine the maximum fast forward speed. This might also be useful for non-fast forward and allow for slow clients to join network games: the simulation speed could be
07:39:06 <Bouke> lowered dynamically based on the needs of any client.
07:39:49 <peter1138> Why would a server running fast cause desyncs?
07:42:44 <Bouke> That's what I gathered from previous discussions: the clients cannot keep up with the fast server, so the user on the slow client would act upon outdated world state.
07:42:56 <locosage> it's not desync, just disconnect
07:43:13 <locosage> limiting speed by slowest client is doable but probably only useful on a private server
07:48:39 <jorropo> peter1138: take "server running fast" and change it by "client running slow"
07:48:51 <peter1138> Why would a client running slow cause desyncs?
07:49:17 <locosage> you're just confusing the meaning of desync
07:49:20 <jorropo> it won't desync but I don't think you could play, you will be behind the game state
07:49:20 <locosage> desync is a game bug
07:49:25 <jorropo> it's still in sync just in the past
07:49:42 <locosage> slow clients just drop connection
07:50:07 <jorropo> so what they propose is that the connection is up and the server slows the simulation speed so they can stay online
07:50:56 <jorropo> I guess there already is an admin command to do this? And given you probably don't want this on a public server, idk if it's useful.
07:51:56 <locosage> there is no command in vanilla but if you use citymania patchpack as a server there is `cmgamespeed`
07:51:58 <Bouke> Well it would be useful, as it allows having fast forward in multiplayer.
07:52:13 <jorropo> Bouke: I'm not sure what you mean, it won't make anything faster
07:52:46 <jorropo> there must already be a buffer allowing a client that is transiently behind to catch up, else the tick rate would need to be above the RTT.
07:53:07 <truebrain> Bouke: most of the things are now in place to pull this off. The only thing missing, is to revise how we do our lockstep. Currently the server sends every other frame a message to the client: you can proceed to frame N, and they all empty their command queue and get to frame N. There is no other communication. What we kinda need is some other mechanism where the server is also more aware how much
07:53:07 <truebrain> struggle a client has, and a more modern approach to do lockstepping
07:54:13 <truebrain> we also send a lot of packets per second currently; which is a bit troublesome with high latency connections
07:54:39 <truebrain> (TCP, so out-of-order is corrected on network level, which with high latency can make the latency even higher)
07:55:38 <jorropo> truebrain: If the io is synchronised with the main event loop you can use of a latency based cgc and block in the write call to the clients.
07:56:00 <truebrain> somewhat related, most of the network "client is slow" is based on ticks, instead of real-world time. That is another wish to address.
07:57:24 <truebrain> lastly, long term, I would really like that the line between singleplayer and multiplayer gets more blurry. Where you can just invite a friend to join you in your game, even if you started it as singleplayer. Fast-forward is one of the last obstacles there
07:59:16 <jorropo> oh I thought fast forward was network catch-up, it's the actual speed increase button?
07:59:17 <truebrain> this all btw isn't all that trivial, due to some nuances all over the place. For example, you can define how many ticks a client can run forward each time the server tells it to. Normally this is +1, but it can be configured up to like +10
07:59:23 <truebrain> for fast-forward, that is really an issue
08:00:13 <truebrain> the idea behind being able to configure this, is that for high-latency connections, you don't want to send 30 packets a second
08:00:18 <truebrain> but like 3 might be more realistic
08:00:33 <truebrain> what we don't know is if this is actually used by any server ... 14.0 will tell us, but that is a bit away 🙂
08:00:42 <jorropo> truebrain: 30 packets a second is nothing
08:01:18 <peter1138> Per client, per server...
08:01:20 <truebrain> hope this braindump helps a bit Bouke ; feel free to poke me if you have ideas or anything
08:01:54 <truebrain> jorropo: yeah, those kinds of remarks .. always make me wonder if you actually understood our domain before you wrote that down 🙂
08:02:53 <jorropo> truebrain: Yeah sorry
08:03:19 <jorropo> I know networking but not openttd's details, I was drawing conclusions, let me be back when I have useful things to say
08:03:41 <truebrain> 🙂 It is okay 🙂 But a packet per frame really is an issue on higher-latency connections
08:04:11 <truebrain> especially as we are lockstep, done via TCP. It gives terrible user experience
08:04:20 <truebrain> deterministic games are the worst!
08:04:52 *** NGC3982 has quit IRC (Ping timeout: 480 seconds)
08:05:07 <jorropo> what's lockstep here? does that mean the server can't run the next frame without having received the inputs of the clients?
08:05:23 <truebrain> Kinda; the clients cannot run a frame till the server tells them to
08:05:32 <truebrain> and all clients will have received all commands to execute during those frames
08:05:33 <locosage> there isn't much you can do about it though, undoing a tick in openttd is quite hard and increasing frame freq just increases lag
08:05:42 <truebrain> so their queue is filled first with what to do, then they are told to continue
08:06:05 <truebrain> that way, both client and server execute their commands at the exact same frame, in the exact same order, always
08:06:27 <truebrain> as OpenTTD doesn't have any sort of local determinism, but is globally deterministic, that is how it has to be done currently
08:06:58 <truebrain> for Fast Forward ofc, you should increase the time between those packets like a mofo, as otherwise you are flooding the network with thousands of requests per second 🙂
08:07:28 <truebrain> (time is measured in ticks here .. "time" is a stupid word to use, I should avoid it 🙂 )
08:07:32 <Bouke> Does the client confirm that it has seen commands/ran a frame?
08:07:41 <truebrain> I am not sure, I have to admit
08:07:46 <truebrain> there is some backtalk from client to server
08:07:49 <truebrain> just can't remember what
08:08:43 <truebrain> yeah, there are "ack" frames
08:08:50 <truebrain> those are on a lower frequency
08:09:11 <truebrain> ` /* Let the server know that we received this frame correctly. We do this only once per day, to save some bandwidth ;) */`
08:09:22 <truebrain> back in 2007 I already understood the solution was crap 🙂
08:09:25 <Bouke> truebrain: I like this, I'm mostly switching between single/coop throughout the same game. Sometimes just running the server without clients to allow easy joining, but then I lack fast forward.
08:09:58 <truebrain> it is why "invite-only" exists; so we can make this more blurry 🙂
08:10:20 <Bouke> truebrain: Does that trigger "client is slow", or is that related to tcp/sockets?
08:10:53 <truebrain> there btw is also a "server sync" frame, not to be confused with anything we talked about. This is to detect desyncs. It compares the sync seeds from time to time.
08:11:23 <truebrain> this is why desyncs are something totally different in OpenTTD; it runs separate from everything else
08:12:04 *** NGC3982 has joined #openttd
08:12:18 <truebrain> `NetworkCalculateLag` does the "lag" calculation
08:12:22 <truebrain> it is cute how silly it is 🙂
08:12:49 <Bouke> So what happens if my slow client is a few frames behind, and tries to perform a command that is invalid according to the server's game state?
08:13:05 <truebrain> when your client is behind, you cannot perform actions 🙂
08:13:10 <truebrain> it will catch up first, or tries to
08:13:26 <truebrain> but in case you do a command, it is locally validated, sent as-is to the server
08:13:33 <truebrain> the server validates it again when he wants to execute it
08:13:41 <truebrain> and sends that back to you: execute it at this moment in time
08:14:00 <truebrain> so an example there: say client A and client B build a rail at the EXACT same tick
08:14:10 <truebrain> both requests travel to the server, the server queues client A's request
08:14:15 <truebrain> and client B's request is denied
08:14:20 <truebrain> both clients are told about the action A did
08:14:25 <truebrain> and B is just not happening
08:14:30 <Bouke> truebrain: It knows it is behind as it checks the network (command/frame) buffer?
08:15:39 <truebrain> Bouke: ah, no, I am wrong, you can execute commands while your client is lagging behind. So yeah, the scenario above applies
08:15:45 <truebrain> it happens even when clients don't lag, basically
08:15:58 <truebrain> but the server is in control; he tells the clients what happened, even their own commands
08:16:09 <truebrain> a client never executes his own commands without the server telling him so
08:17:30 <truebrain> This kinda causes your client to become somewhat unresponsive when you are getting behind, even when you did handle the network stack
08:17:56 <truebrain> that line basically says: run the GameLoop as QUICK as you can when you are falling behind
08:17:59 <truebrain> it is terrible design 🙂
08:19:00 <truebrain> you will see in the code that there is a `frame_counter_server` and a `frame_counter_max` .. one says where you MUST be, the other says where you COULD be. New commands are always scheduled after the `frame_counter_max`
08:19:18 <truebrain> lagging clients can't keep up with `frame_counter_server`, which causes hanging clients and shit
08:19:29 <truebrain> did I mention "it is complicated"? 🙂
08:22:39 <peter1138> "but all modern games predict"
08:23:11 <truebrain> Bouke: search the code for `incoming_queue` to get a bit of an idea how the queues connect together between client and server
08:24:19 <truebrain> I see I am slightly wrong in the above statement: the command to build a rail from A and B arrive on all clients. They all execute A first, and B will fail
08:24:39 <truebrain> but the server doesn't revalidate if it will actually work; it just distributes it to all clients, which will just fail to execute the latter 🙂
08:26:16 <truebrain> (for clarity, I wrote this in 2007? 2009? Somewhere there .. so my memory isn't always as accurate as I would like it to be on the details 🙂 )
08:27:26 <peter1138> Server won't send B to the clients if it failed on the server, surely.
08:27:39 <jorropo> locosage: I would "just" have clients live in the past.
08:27:39 <jorropo> So the server sends a packet giving the ok tick number from time to time without any consideration about latency or anything.
08:27:39 <jorropo> The client runs its own frame pacer that has a slight bias to accelerate if the receive buffer starts to get a bit full.
08:27:39 <jorropo> If you "just" do that, if someone has a 3s one-way latency the client will store 0~1 tick of information in its buffer and the pipe will store ~3s of information that will trickle in at the tick rate.
08:27:39 <jorropo> The main issue with that design is state-modifying inputs; the easiest solution is to always have the client first send the input to the server and have the server broadcast the inputs in order.
08:27:41 <jorropo> That means in practice I would expect a TCP stream that looks like this (binary data to be compact):
08:27:41 <jorropo> | Client | very slow internet | very slow internet | more very slow internet | Server |
08:27:43 <jorropo> | Placed rail at 42, 42 → → → → |
08:27:43 <jorropo> | ← ← ← ← Player 3 placed road at 45, 45, T123 |
08:27:45 <jorropo> | Client | very slow internet | very slow internet | more very slow internet | Server |
08:27:45 <jorropo> | → Rail at 42, 42 → → → |
08:27:46 <truebrain> I don't see any evidence the server validates the packet before distributing it again, peter1138. But it might be hiding somewhere
08:27:47 <jorropo> | ← ← ← P3 road at 45,45, T123 ← Player 5 placed rail at 5,5, T124 |
08:27:47 <jorropo> | Client | very slow internet | very slow internet | more very slow internet | Server |
08:27:49 <jorropo> | → → Rail at 42, 42 → → |
08:27:49 <jorropo> | ← ← P3 road 45,45, T123 ← P5 rail 5,5, T124 ← T125 |
08:27:51 <jorropo> | Client | very slow internet | very slow internet | more very slow internet | Server |
08:27:51 <jorropo> | → → → Rail at 42, 42 → |
08:27:53 <jorropo> | ← P3 road 45,45, T123 ← P5 rail 5,5, T124 ← T125 ← T126 |
08:27:53 <jorropo> | Client | very slow internet | very slow internet | more very slow internet | Server |
08:27:55 <jorropo> | → → → → Rail at 42, 42 (received) |
08:27:55 <jorropo> | P3 road 45,45, T123 ← P5 rail 5,5, T124 ← T125 ← T126 ← P0 Rail at 42, 42, T127 |
08:27:57 <jorropo> In that sense you are using the latency in routers or photons going through intercontinental fibers as your storage for ticks in flight.
08:27:57 <jorropo> This would work very well on reliable connections, realistically two people with fiber internet, wired ethernet (no wifi).
08:27:59 <jorropo> Having many packets per second is *fine* as long as you pace them and/or are resistant to packet loss. A wifi router will completely choke if you send 30 packets in a few µs, but not if you equally spread them over a second.
08:27:59 <jorropo> - The main drawback is that currently this design will have a big lag spike on each packet loss because you can't process the next ticks without receiving the information for the current one first. This can be fixed by adding a time buffer to the client that is slightly above the RTT. So when a packet is lost you have time to ask for retransmission before your local simulation would block.
08:28:01 <jorropo> - The second less bad drawback is that when you do an action that modifies the simulation you don't see the result until it has spent a round trip first, maybe two.
08:28:01 <jorropo> - A drawback that I think is completely ok (who cares) is that different players see more or less outdated information based on their latency.
08:28:22 <truebrain> Best comment out of the whole network stack π
08:28:29 <peter1138> That... was a wall of text.
08:28:47 <jorropo> peter1138: the ascii diagram makes it look way worse than it is
08:29:21 <jorropo> The annoying issue with optimizing for wifi is that you can do many MiB/s if you pace the packets correctly, but it's not your own stream that matters, so your pretty perfectly paced-out packets can still overflow the cheap router buffer if it's already full from a youtube video that is downloading a video chunk or smth.
08:29:32 <truebrain> jorropo: best to read up how we deal with multiplayer first 🙂 It helps in the conversation 🙂
08:30:19 <peter1138> Despite what Discord might want you to believe, it's primarily for chat messages, not walls of text.
08:30:57 <peter1138> Everything you wrote will be lost in a few hours.
08:31:32 <truebrain> yeah, when a command is received, it does validate that it is a valid command (so not a server command, are you not a spectator, etc etc)
08:31:44 <truebrain> but after that, it is put on distribution to all other clients at a frame allocated by the server
08:31:54 <truebrain> so the server doesn't actually validate the command will succeed or not
08:32:01 <truebrain> in this context, the server is its own client too btw
08:32:21 <peter1138> It executes the command itself, I always assumed the result of that determined whether it then got passed to clients.
08:32:33 <truebrain> no, and I guess it kinda makes sense
08:32:37 <truebrain> the server itself didn't enter the frame yet too
08:32:48 <truebrain> it tells the clients at the same time as the server to progress to the next frame
08:32:53 <truebrain> so everything is in lockstep
08:33:15 <peter1138> I have... run out of coffee.
08:33:29 <truebrain> I think the idea behind this was, so servers don't have an unfair advantage
08:33:32 <Eddi|zuHause> well, since you know both client and server run the same command on the same tick, it's guaranteed the result will be the same
08:34:04 <Eddi|zuHause> whether that result is success or failure doesn't matter
08:34:43 <truebrain> (as if you don't do it like this, clients are always slightly behind the server, giving the server the ability to always "win" .. which is only an issue if you have higher `frame_freq`)
08:35:22 <truebrain> anyway, back in 2007 we did experiment with it, and it is really rare that a client could locally validate a command successfully, but it would fail on the server because of an action another client took
08:35:36 <truebrain> so it is actually fine, I guess
08:36:00 <peter1138> Fair. Commands are (mostly) small.
08:36:02 <truebrain> it would also, in theory, mean the server doesn't actually have to track the game-state 🙂
08:36:16 <peter1138> Except for sending the map...
08:36:27 <truebrain> torrent that from other clients! 🙂
08:36:29 <Eddi|zuHause> the only job of the server is to put the commands in an order. not test the validity
08:36:37 <truebrain> it is mostly the reason there is very little overhead of being the server
08:36:56 <truebrain> it just routes packets 🙂
08:36:56 <peter1138> Okay, so... run the network stuff in a separate thread.
08:37:06 <peter1138> Then OpenTTD is fully multi-threaded 🙂
08:37:27 <truebrain> there is some threading going on, just not sure how far 🙂 It is from before threading was possible, and when most computers only had a single core 🙂
08:37:31 <peter1138> The server can get behind itself 🙂
08:38:25 <truebrain> yesterday I had a thought how to allow passwords to be stored in savegames, which is so stupidly simple that I can't believe I didn't think of it before. Which most likely means I am forgetting something and it is impossible 🙂
08:38:30 <alfagamma_0007> peter1138: What
08:40:58 <truebrain> hmm, in fact, the code already partially does what I had in mind, just not everywhere, and not consistent ... we are funny sometimes 🙂
08:41:17 <LordAro> not consistent? OTTD? I am shocked
08:41:33 <truebrain> so we hash the company password before transmitting
08:41:38 <truebrain> but we don't hash the server password
08:42:44 <truebrain> the company salt is slightly wrong, but the idea is there
08:42:55 <truebrain> and the md5sum is calculated ... weirdly, for a salted password
08:43:26 <truebrain> so it seems we just need to clean up this stuff a bit, and then you could just save the hashes in the savegame, I think
08:45:09 <Eddi|zuHause> not sure if you should use md5 for password hashes
08:45:41 <truebrain> MD5 really is okay for our goal; ideally we upgrade it to SHA-256, sure, but there isn't an essential problem here. As long as you use salts, it is fine
08:45:42 <peter1138> The issue is something like, do you want the password hashed in transit, or hashed in storage.
08:46:10 <peter1138> If it's hashed in transit then you (usually) need a cleartext form on the server to validate it.
08:46:51 <peter1138> So that the server can hash the stored password and get the same result.
08:47:02 <truebrain> but if you also store the hashed password
08:47:08 <truebrain> you don't need to see the plaintext ever on the server 🙂
08:47:12 <Eddi|zuHause> or it could just compare it to the stored hash?
08:48:51 <peter1138> If you're just comparing the stored hash with what the client sends, then the stored hash becomes the password, it's no longer a challenge protocol. This works ok when the data channel is encrypted.
08:49:22 <truebrain> "encrypted" .. the data channel has to be trusted, but yes
08:49:58 <truebrain> (I mention this nuance, as I am being pedantic 🙂 )
08:51:21 <truebrain> basically what you mention is that you could do a replay attack when there isn't a challenge
08:51:34 <truebrain> but storing the plaintext password on both sides isn't really a good way to do such a challenge
08:51:38 <truebrain> there are better ways these days 🙂
08:51:50 <peter1138> These days, yes, it mostly revolves around "just use TLS"
08:51:50 <truebrain> but more the question: does OpenTTD need any of that?
08:52:05 <peter1138> SASL existed for a long time to solve the non-TLS case
08:52:16 <truebrain> I was more thinking about stuff like DH
08:52:31 <Eddi|zuHause> DH is like what? 50 years old?
08:53:35 <truebrain> but if we look at OpenTTD .. we want a client to set a password, and when he repeats that password, it should unlock his company. From a server perspective, we don't actually care what that password is or represents. What we want to avoid is that we store plaintext passwords in savegames, as that would be ... shit? 🙂
08:53:50 <truebrain> so if our clients salt+hash the password client side, where the salt is given by the server
08:53:59 <truebrain> and depends on something that is not inside the savegame
08:54:16 <truebrain> the client can set a password where it is nearly impossible, on the server (or on the wire), to find back what the input was
08:54:33 <truebrain> so we can store that in the savegame
08:54:54 <truebrain> we already have a salt that is not inside savegames: `network_id`
08:55:02 <peter1138> Salt+Hash is replayable, basically.
08:55:10 <truebrain> so a savegame without the `secrets.cfg` is useless to anyone
08:55:21 <truebrain> yes, you can still have replay attacks; but that is not for the password part to solve
08:55:38 <truebrain> that is to say, without encrypting our datachannel, that is impossible to solve 🙂
08:55:47 <truebrain> well, not impossible .. just difficult
08:56:09 <truebrain> but storing in a savegame yes/no does not depend on whether things can be replayed yes/no 🙂
08:56:17 <truebrain> currently the game password is sent plaintext .. also pretty replayable 🙂
08:56:39 <truebrain> so basically we identified two problems here, I guess 🙂
08:57:15 <truebrain> to solve it properly, we would need to go password-less, and have a proper challenge method
08:57:42 <truebrain> but should that hold us back from storing the current stuff in savegames? (honest question)
09:01:31 <truebrain> worded differently I guess: the salt+hash is not to secure the data channel. It is to avoid servers knowing the plaintext. As with most software that uses salt+hash.
09:02:13 <_jgr_> I've done some tinkering in this area lately
09:02:36 <truebrain> did you fix the weird way we salt passwords? 🙂
09:02:39 <truebrain> it is really odd 🙂
09:03:10 <_jgr_> Partly, I'm doing different things for different types of passwords
09:03:12 <truebrain> peter1138: with the libcurl addition we also get an encryption library, so in theory we could just encrypt our datachannel too now 🙂
09:03:23 <truebrain> _jgr_: got a link to your work?
09:03:53 <Bouke> peter1138: SRP is a method to fix this. The server wouldn't need to keep a plaintext version, nor have the plaintext transmitted for verification.
09:05:27 <_jgr_> truebrain: I haven't really written any documentation beyond a one-liner yet
09:05:45 <_jgr_> I can find some commit IDs though
09:06:03 <truebrain> curious how you approached the problem for sure 🙂 So yeah, or just a general pointer
09:07:22 <truebrain> you want the pub/private way 🙂
09:07:50 <_jgr_> For rcon, settings and game passwords, I made the password part of the DH key exchange
09:08:19 <_jgr_> Which conveniently allows encrypting/authenticating the rcon message and response
09:08:22 <truebrain> that makes storing it in a savegame really tricky, not?
09:08:33 <_jgr_> There's no need to store those passwords in a savegame
09:08:50 <_jgr_> For company password, I just store them in an encrypted chunk for server saves
09:09:04 <truebrain> so you save them? 🙂
09:09:29 <_jgr_> Yes, otherwise there's no way to restart the server without creating a griefing problem
09:10:26 <truebrain> I have so many questions, but it is easier to read the code 🙂
09:10:39 <Bouke> truebrain: Thanks, will have a look some other time. Sadly my CLion trial expired, so will see what I can do about that.
09:11:13 <truebrain> anyway, I see two things that we should address: securing the data channel (encryption, authentication, whatever), and allowing for storing the company passwords in the savegame
09:12:16 <truebrain> why did you go for Monocypher, _jgr_ ?
09:14:36 <_jgr_> It's small, simple, portable, and the API seems reasonable. I was already aware of it.
09:14:49 <truebrain> seems better suited than libsodium
09:14:50 <_jgr_> I didn't want to create a platform portability headache for myself
09:14:58 <truebrain> no, that is always the main issue here
09:15:04 <Bouke> truebrain: Have a look at SRP, it allows for password verification over unencrypted channels and storing βhashed passwordsβ. In this scheme, the password is never exposed to the server.
09:15:18 <truebrain> you mentioned it earlier, yes
09:16:35 <truebrain> but see, a salt+hash is fine if the datachannel has anti-replay protection. So I guess it depends how deep this rabbit hole should be 🙂
09:19:40 <jorropo> Bouke: I've read your link, it would work here, the only tricky part is that you would either need to hardcode a salt or have the server transmit it on first connection.
09:19:40 <jorropo> The tricky part is that it does not specify how you send the original salt + password when you set the password, and if you just sent it as-is cleartext over the wire it would be replayable. Which sounds fine.
09:20:33 <Bouke> Well if salt+hash is leaked, it would allow a modified client to authenticate without knowing the password. So it really depends on the security requirements.
09:21:33 <truebrain> the main requirement is: a server-owner should not be able to see passwords of clients 🙂 (we succeed in that, btw)
09:22:14 <_jgr_> Arguably MD5 is getting a bit wobbly for that these days
09:22:29 <truebrain> still, a plaintext collision is really difficult
09:22:39 <truebrain> but yeah, ideally we replace those with a bit more modern solutions 🙂
09:22:45 <truebrain> but here too, cross-platform solutions are hard to come by
09:24:56 <truebrain> honestly, if we have some kind of crypto library, we don't actually need company passwords
09:25:21 <truebrain> an ACL is much more useful, in that case .. but .. yeah .. work 🙂
09:27:11 <jorropo> FWIW you could copy a generic plain C or C++ implementation of SHA-256. Performance wouldn't be awesome, probably in the one- or two-digit MiB/s range, but you are speaking about hashing passwords from time to time.
09:27:28 <locosage> truebrain: game itself says otherwise 🙂
09:27:48 <truebrain> jorropo: I would really prefer to prevent copying. Having a library means we don't have the maintenance burden 🙂
09:27:54 <truebrain> locosage: I looked through the code; it is a lie.
09:28:13 <truebrain> I am also not sure what actually happened there .. someone went through the motions of adding client-side hashing + salt
09:28:21 <truebrain> but didn't add storing it in the savegame
09:28:25 <truebrain> which is a bit puzzling
09:29:15 <Bouke> jorropo: When the client sets the password, it generates the salt and computes a verification key from the password, then it sends to the server: salt + verification key. On verification, the server sends the challenge: public key + userβs salt.
09:30:22 <truebrain> introduced in 2007, the salt+hash .. lol
09:30:24 <jorropo> truebrain: Statically link the lib and turn off all non-pure C or C++?
09:30:47 <truebrain> _jgr_: yes, based on the idea that the server handled it plaintext
09:31:38 <truebrain> it is also why we added the warning, I think
09:31:55 <truebrain> but .. yeah .. seems there was some disconnect there somewhere 🙂
09:32:38 <truebrain> rather puzzling, honestly
09:32:47 <truebrain> especially as it is salted .. so servers can't even do a rainbow-table attack ..
09:33:05 <truebrain> finding the input back for it would require an unreasonable amount of time ..
09:33:21 <truebrain> guess we all kept each other in the loop of "it is sent plaintext", without anyone actually checking? I dunno .. it is odd
09:33:52 <truebrain> the game-password is plaintext, which is just bad 🙂
09:34:20 <_jgr_> I think that the rcon password being sent in the clear is the most egregious one
09:34:29 <_jgr_> You can almost take over a server with that
09:34:33 <jorropo> truebrain: MD5 is really really fast to hash on modern hardware.
09:34:33 <jorropo> If you don't use a correct-horse-battery-staple kind of password, just bruteforcing it is reasonable.
09:35:00 <locosage> _jgr_: still need to get in between admin and server somehow though
09:35:26 <_jgr_> In practice a lot of players use trivial passwords and are DMing them to each other whenever there is a traffic jam, etc
09:36:06 <truebrain> the more I read into how the current network stuff is, the more I get weirded out; maybe not focus on that, and more look into what JGR did 🙂
09:36:18 <jorropo> It's a low-stakes train game after all. I don't have personal info stored in my company 🙂
09:36:33 <truebrain> you would be shocked how many people didn't get that memo
09:37:45 <truebrain> _jgr_: that randombytes library, also solid?
09:37:56 <locosage> I almost want to do some dict attack on public servers just out of curiosity xD
09:39:24 <_jgr_> truebrain: It's just some thin wrappers around OS random functions
09:39:40 <truebrain> at least it is not using the wrong functions on Linux 🙂
09:43:52 <truebrain> I still like DH .. it is so weird yet so powerful 🙂
09:44:08 <truebrain> (reading your implementation _jgr_ ; it is pretty self-explanatory)
09:46:56 <truebrain> _jgr_: just to understand your mindset: you encrypt/authenticate rcon messages, but not any others; is your intention to also bring that to game packets?
09:46:57 <dwfreed> truebrain: the problem with DH by itself is you do not know if you're being MITMed; you'd need some way for the client to be able to validate that the DH public key it's received is actually from the server
09:47:23 <truebrain> yup; authentication 🙂
09:48:01 <_jgr_> truebrain: At that point it'd make more sense to encrypt/authenticate the whole transport layer
09:48:16 <truebrain> yeah, exactly; so I was more wondering what made you pick rcon over the rest?
09:48:30 <truebrain> because it is the most evil attack vector?
09:48:43 <truebrain> (there is no wrong answer, to be clear; just curious)
09:49:28 <_jgr_> I was thinking from the point of view of protecting the server
09:49:51 <_jgr_> What users do on the server is not really a valuable secret in that sense
09:50:04 <truebrain> and the game protocol should prevent abuse to start with 🙂
09:50:34 <_jgr_> Going full TLS would mean certificates, which are an enormous pain in the rear which I'd rather not deal with in my free time 🙂
09:51:01 <truebrain> guess the main issue with bringing the rcon stuff to vanilla is that we have to explain it really well for 3rd parties 🙂
09:51:50 <_jgr_> If it gets added to vanilla, any forks could just merge it as is, surely?
09:52:12 <_jgr_> Are there any of those?
09:52:28 <truebrain> at least 2 I know of 🙂
09:52:29 <_jgr_> Usually 3rd party tools use the admin port instead
09:52:52 <truebrain> I have no clue how popular they are 🙂
09:53:18 <truebrain> oof, admin protocol .. I forgot about that one .. how is the state of that
09:53:47 <_jgr_> Ideally it should never leave the box the server is running on
09:53:59 <truebrain> try to explain that to people 🙂
09:54:27 <_jgr_> The protocol itself is oddly designed, but it works
09:54:30 <truebrain> admin password is sent plaintext, it seems 🙂
09:54:57 <truebrain> similar to game password, implied trust between user and server
09:56:17 <truebrain> the admin protocol allows full rcon control too, ofc
10:55:53 <truebrain> I guess the main question I still have: do we want to use a challenge for company passwords, or are we okay with "you know the (salt+hash of the) password". Yes, the latter could leak, but as our data channel is not authenticated anyway, does that matter? (open question)
10:56:06 <truebrain> adding authentication is .... a lot of work, but we have the infra for it, so it can be done 🙂
12:14:52 <peter1138> If we can do it, we should do it?
12:26:45 <truebrain> it is a lot of work 🙂 Does it add value? I honestly don't know 🙂
12:29:59 <truebrain> it also would make libcurl a required dependency for non-windows
12:53:41 <andythenorth> More swiss grf use
13:38:09 <talltyler> I dunno, those sailboats look relatively sinkable
13:41:53 <peter1138> The scale is all wrong
14:23:15 *** sinas128 has joined #openttd
14:23:15 <sinas128> andythenorth: You should use more than 1.5 tiles on that service
15:14:18 *** Wormnest has joined #openttd
16:36:55 *** Smedles has joined #openttd
17:17:44 <andythenorth> If we all lived in Switzerland we'd make the OpenTTD ploppables experience (objects) better.
17:17:59 <andythenorth> Towns here are like computer games
17:31:02 <andythenorth> Objects with an optional info window & text callback?
17:31:57 <andythenorth> Also objects with behaviours? (Act as station tile, act as industry tile, act as house)
17:51:10 <andythenorth> “We could change them”
17:52:23 <andythenorth> Dunno, am playing a lot of Tropico currently 🙂 Lot of ploppable building placement 🙂
17:53:42 <_glx_> though objects usable like non-traversable station tiles could be nice
18:00:24 <andythenorth> Or station tiles that can be built on corner slopes
18:01:13 <andythenorth> Also it would be nice to have tiles with game effects, e.g. station rating, town rating, loading speed etc
18:01:32 <andythenorth> Some of that maybe GS can do
18:02:39 *** gelignite has joined #openttd
18:02:58 <andythenorth> I don't play many other map-based building games, but when I do, I find the world building elements of OpenTTD are quite under-exploited 🙂
18:03:32 <andythenorth> I know it's a train game, but eh 🙂
18:15:55 <FLHerne> _glx_: objects usable as station tiles, and deprecate the weird station layout stuff? :p
18:22:29 <talltyler> I am planning to PR additions to object spec for consumption and production of cargo (like houses, not industries) but other things are ahead in line 🙂
18:38:59 <DorpsGek> - Update: Translations from eints (by translators)
18:50:20 *** amal[m] has joined #openttd
18:50:31 *** andythenorth[m] has joined #openttd
18:50:40 *** calbasi[m]1 has joined #openttd
18:50:51 *** audunm[m] has joined #openttd
18:51:01 *** Bilb[m] has joined #openttd
18:51:11 *** blikjeham[m] has joined #openttd
18:51:21 *** citronbleuv[m] has joined #openttd
18:51:32 *** cjmonagle[m] has joined #openttd
18:51:42 *** CornsMcGowan[m] has joined #openttd
18:51:51 *** einar[m] has joined #openttd
18:52:00 *** elliot[m] has joined #openttd
18:52:11 *** EmeraldSnorlax[m] has joined #openttd
18:52:21 *** emilyd[m] has joined #openttd
18:52:30 *** fiddeldibu[m] has joined #openttd
18:52:50 *** freu[m] has joined #openttd
18:53:02 *** giords[m] has joined #openttd
18:53:10 *** grag[m] has joined #openttd
18:53:21 *** gretel[m] has joined #openttd
18:53:31 *** hamstonkid[m] has joined #openttd
18:53:41 *** Heiki[m] has joined #openttd
18:53:50 *** pikaHeiki has joined #openttd
18:54:00 *** igor[m] has joined #openttd
18:54:10 *** imlostlmao[m] has joined #openttd
18:54:20 *** jact[m] has joined #openttd
18:54:30 *** jeeg[m] has joined #openttd
18:54:41 *** jeremy[m]1 has joined #openttd
18:54:52 *** joey[m]1 has joined #openttd
18:55:00 *** karl[m]12 has joined #openttd
18:55:10 *** karoline[m] has joined #openttd
18:55:20 *** kstar892[m] has joined #openttd
18:55:28 <peter1138> "objects usable as station tiles" are called station tiles.
18:55:31 *** leward[m] has joined #openttd
18:55:40 *** linda[m]1 has joined #openttd
18:55:50 *** luffy[m] has joined #openttd
18:56:01 *** luk3Z[m] has joined #openttd
18:56:11 *** magdalena[m] has joined #openttd
18:56:20 *** menelaos[m] has joined #openttd
18:56:31 *** NekomimiGunner18[m] has joined #openttd
18:56:41 *** nolep[m] has joined #openttd
18:56:50 *** osvaldo[m] has joined #openttd
18:57:00 *** patricia[m]1 has joined #openttd
18:57:10 *** patrick[m]12 has joined #openttd
18:57:20 *** paulus[m] has joined #openttd
18:57:30 *** phil[m] has joined #openttd
18:57:41 *** philip[m]123 has joined #openttd
18:57:50 *** playback2396[m] has joined #openttd
18:58:00 *** royills[m] has joined #openttd
18:58:12 *** rudolfs[m] has joined #openttd
18:58:22 *** shedidthedog[m] has joined #openttd
18:58:31 *** Farrokh[m] has joined #openttd
18:58:41 *** soylent_cow[m] has joined #openttd
18:58:51 *** temeo[m] has joined #openttd
18:59:01 *** thelonelyellipsis[m] has joined #openttd
18:59:10 *** thomas[m]1234567 has joined #openttd
18:59:20 *** tonyfinn has joined #openttd
18:59:31 *** Gadg8eer[m] has joined #openttd
18:59:50 *** JamesRoss[m] has joined #openttd
19:00:02 *** vista_narvas[m] has joined #openttd
19:00:10 *** VincentKadar[m]1234 has joined #openttd
19:00:20 *** Elysianthekitsunesheher[m] has joined #openttd
19:00:30 *** wormnest[m] has joined #openttd
19:00:41 *** YourOnlyOne has joined #openttd
19:00:50 *** yubvin[m] has joined #openttd
19:01:01 *** zzy2357[m] has joined #openttd
19:01:20 *** YourOnlyOne is now known as Guest8201
19:07:02 *** Flygon has quit IRC (Quit: A toaster's basically a soldering iron designed to toast bread)
20:19:37 *** Kitrana1 has joined #openttd
20:23:14 <locosage> is there really no function to get TileIndexDiff from two TileIndex?
20:24:46 *** Kitrana2 has joined #openttd
20:25:36 <locosage> feels weird to rely on integer overflow
20:26:35 *** Kitrana has quit IRC (Ping timeout: 480 seconds)
20:27:43 *** Kitrana has joined #openttd
20:28:07 <Rubidium> well, not directly but there is TileIndexToTileIndexDiffC and you can go from there to TileIndexDiff. Though essentially that'll just give you the same as TileIndex - TileIndex
20:28:28 <brickblock19280> make a function relying on integer overflow
20:29:44 <Rubidium> *unless* you keep using the TileIndexDiffC and use AddTileIndexDiffCWrap?
20:30:04 *** yemtron has joined #openttd
20:30:40 *** Kitrana1 has quit IRC (Ping timeout: 480 seconds)
20:31:22 <locosage> I kinda need TileIndexDiff
20:32:07 <locosage> can even keep everything in TileIndex but that's a bit too hacky I guess
20:32:56 *** Kitrana2 has quit IRC (Ping timeout: 480 seconds)
20:36:41 <_jgr_> locosage: Tile indices are unsigned
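[editor's note] Rubidium's route above (TileIndex → TileIndexDiffC → TileIndexDiff) can be modelled outside the codebase to show why the unsigned subtraction still "works": the wrapped unsigned difference reinterpreted as signed equals the signed per-axis delta recombined into one index. MAP_SIZE_X below is an assumed example map width, not a value taken from the engine.

```python
# Minimal model of signed tile-index differences on a row-major map.
# MAP_SIZE_X is illustrative; OpenTTD maps are configurable.
MAP_SIZE_X = 256

def tile_xy(tile: int) -> tuple[int, int]:
    # Decompose a TileIndex into (x, y), as TileX/TileY would.
    return tile % MAP_SIZE_X, tile // MAP_SIZE_X

def tile_index_diff(a: int, b: int) -> int:
    # TileIndex -> TileIndexDiffC -> TileIndexDiff: signed per-axis
    # deltas recombined into a single signed index delta (a - b).
    ax, ay = tile_xy(a)
    bx, by = tile_xy(b)
    return (ay - by) * MAP_SIZE_X + (ax - bx)

a = 5 * MAP_SIZE_X + 10   # tile at (x=10,  y=5)
b = 3 * MAP_SIZE_X + 200  # tile at (x=200, y=3)

# Adding the diff to the base tile gets you back to the target.
assert a + tile_index_diff(b, a) == b

# The same value falls out of 32-bit unsigned wrap-around subtraction,
# which is what "relying on integer overflow" amounts to.
raw = (b - a) & 0xFFFFFFFF
signed = raw - 2**32 if raw >= 2**31 else raw
assert signed == tile_index_diff(b, a)
```

Both paths agree as long as the real delta fits in the signed range, which is why the explicit DiffC route reads as the less hacky option in the log.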
21:03:57 <Eddi|zuHause> FLHerne: grf spec doesn't really have the concept of "deprecate"
21:13:57 *** nielsm has quit IRC (Ping timeout: 480 seconds)
21:41:02 *** keikoz has quit IRC (Ping timeout: 480 seconds)
21:52:12 <Eddi|zuHause> that's a necropost if i've ever seen one
21:53:45 *** gelignite has quit IRC (Quit: Stay safe!)
22:11:09 <_glx_> yeah and timidity is really the very old way, we use fluidsynth by default now
22:35:51 *** imlostlmao[m] has quit IRC ()
22:38:31 *** pikaHeiki has quit IRC (Quit: Client limit exceeded: 20000)
22:54:31 *** tonyfinn has quit IRC (Quit: Client limit exceeded: 20000)
23:51:23 *** tokai|noir has joined #openttd
23:51:24 *** ChanServ sets mode: +v tokai|noir
23:52:01 *** wallabra[m] has quit IRC ()
23:57:56 *** tokai has quit IRC (Ping timeout: 480 seconds)
continue to next day →