
Recommended Posts

Posted
2 hours ago, waldemarnt said:

great work 👏 does it work with l2off too?

It will, but you will lose the benefit of using the API to get things like proxy status. I'm sure some l2off dev will be able to implement it, but I know nothing about l2off.

  • 4 weeks later...
Posted (edited)

According to the graph you show, in the first case the connection is made from the VPS to the main server; that is, the intermediary makes the request for you, waits for the answer and then passes it back to you, so in the end the time it takes for the answer to reach the client is the sum of both legs. For the client, the ping shown in the client will be the one between the client and the proxy, if I am not mistaken, so it does not seem to be the real one. Does anyone understand how the second case works? Can anyone explain it?

Edited by TGSLineage2
Posted
3 hours ago, TGSLineage2 said:

According to the graph you show, in the first case the connection is made from the VPS to the main server; that is, the intermediary makes the request for you, waits for the answer and then passes it back to you, so in the end the time it takes for the answer to reach the client is the sum of both legs. For the client, the ping shown in the client will be the one between the client and the proxy, if I am not mistaken, so it does not seem to be the real one. Does anyone understand how the second case works? Can anyone explain it?

Those are completely separate connections. The diagram shows the logical flow of the login process. After you log in, the player is directly connected to the proxy, which is directly connected to the server. A single player is always connected to the gameserver through a single proxy. The graph just shows that there might be multiple proxies to choose from.
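
Roughly what that relay looks like in code, if it helps to picture it. This is only a sketch: the gameserver address, ports and class name are placeholders, and the real proxy also deals with the login-server redirect and packet encryption, which this leaves out completely.

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Minimal sketch of the "player -> proxy -> gameserver" relay described above.
// GAMESERVER_HOST/PORT are hypothetical values; a real L2 proxy also handles the
// login-server redirect and packet encryption, which is omitted here.
public class TinyRelay {
    static final String GAMESERVER_HOST = "10.0.0.5"; // hidden gameserver address
    static final int GAMESERVER_PORT = 7777;
    static final int LISTEN_PORT = 7777;              // port the players connect to

    public static void main(String[] args) throws IOException {
        try (ServerSocket listener = new ServerSocket(LISTEN_PORT)) {
            while (true) {
                Socket player = listener.accept();
                Socket server = new Socket(GAMESERVER_HOST, GAMESERVER_PORT);
                pump(player, server); // player -> gameserver
                pump(server, player); // gameserver -> player
            }
        }
    }

    // Copies bytes from one socket to the other on a background thread.
    static void pump(Socket from, Socket to) {
        new Thread(() -> {
            try (InputStream in = from.getInputStream();
                 OutputStream out = to.getOutputStream()) {
                in.transferTo(out);
            } catch (IOException dropped) {
                // connection closed, nothing to do
            } finally {
                try { from.close(); } catch (IOException ignored) {}
                try { to.close(); } catch (IOException ignored) {}
            }
        }).start();
    }
}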

Posted (edited)
27 minutes ago, xdem said:

sounds like a reverse proxy to me

It's a MITM proxy that implements reverse proxy functionality. It's not a reverse proxy by definition, but you wouldn't be wrong to call it one either.

Edited by Elfocrash
Posted

MITM is not a technical term as far as I am aware.

It would be either a reverse proxy or a proxy, and since it is server-side this would be a reverse proxy implementation, for which I find no beneficial use on an L2 server other than making things more complicated.

The feature list is already within the spectrum of what the LS/GS can already do, and reverse proxies do not decrease latency, as a matter of fact.

Posted (edited)
14 minutes ago, xdem said:

MITM is not a technical term as far as I am aware.

It would be either a reverse proxy or a proxy, and since it is server-side this would be a reverse proxy implementation, for which I find no beneficial use on an L2 server other than making things more complicated.

The feature list is already within the spectrum of what the LS/GS can already do, and reverse proxies do not decrease latency, as a matter of fact.

MITM is a common term for intercepting proxies. This is an intercepting proxy acting as a reverse proxy.

 

Proxies decrease latency only when implemented on a provider's backbone network.

You are basically taking advantage of the cloud provider's dedicated networking to speed things up indirectly.

 

Here is a diagram explaining it.

[attached diagram]

 

 

It won't do miracles, because the speed of light is the speed of light at the end of the day, but it can make a noticeable difference. You also get protection and control that the GS itself doesn't have to waste resources to deal with. You never have to expose the actual gameserver location, and all your traffic protections can be applied on cheap VPSes rather than expensive dedicated servers. This approach has a lot of advantages. If you have enough proxies you can even shadow-segment the proxies that show up, so if someone DDoSes the server they will just take down 1-10/20 proxies that hold just a few people instead of the whole server.
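
A rough idea of what that segmentation could look like in code. The endpoint hostnames and the ProxyPicker/pickProxyFor names are made up purely for the example:

import java.util.List;

// Sketch of the shadow-segmentation idea: each account is pinned to one of
// several proxy endpoints, so flooding a single proxy only affects the slice
// of players assigned to it. Hostnames are illustrative only.
public class ProxyPicker {
    record Endpoint(String host, int port) {}

    private final List<Endpoint> proxies = List.of(
            new Endpoint("proxy-1.example.com", 7777),
            new Endpoint("proxy-2.example.com", 7777),
            new Endpoint("proxy-3.example.com", 7777));

    // Deterministic mapping: the same account always lands on the same proxy,
    // so taking one endpoint down never touches the other segments.
    public Endpoint pickProxyFor(String accountName) {
        int index = Math.floorMod(accountName.hashCode(), proxies.size());
        return proxies.get(index);
    }
}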

Edited by Elfocrash
Posted
2 minutes ago, Elfocrash said:

 

[attached diagram]

 

 

It won't do miracles, because the speed of light is the speed of light at the end of the day, but it can make a noticeable difference. You also get minor protection and control that the GS itself doesn't have to waste resources to deal with.

 

I don't want to be annoying or anything, but this diagram right here is just theory. There is no guarantee whatsoever that, even if you use Azure's backbone network as the given example implies, you would get a flat 50% latency reduction; this graph looks extremely manipulative and like false marketing to me.

Even if a 10% latency reduction were possible, the above architecture is off-limits for 99% of the live servers out there, making your share another educational-use project :)

Posted (edited)
9 minutes ago, xdem said:

I don't want to be annoying or anything, but this diagram right here is just theory. There is no guarantee whatsoever that, even if you use Azure's backbone network as the given example implies, you would get a flat 50% latency reduction; this graph looks extremely manipulative and like false marketing to me.

Even if a 10% latency reduction were possible, the above architecture is off-limits for 99% of the live servers out there, making your share another educational-use project 🙂

It's not theory at all. It might be new to you but it's actually how things work. Backbone networking is one of the selling features of cloud providers and I use them on a day to day basis. Some basic googling will answer all your questions. You can get anything from a 10% to a 60% reduction depending on the region and the status of the network.

 

Those are actual numbers. Feel free to create a free account on AWS or Azure and test it on your own. Also the project is used in multiple live servers and it has been running in its previous form on a few other servers since 2018. Nothing educational here and nothing to sell (so nothing to market). You can disagree all you want but at the end of the day it doesn't matter because it's just ignorance talking.
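
If you want a quick and dirty way to check the numbers yourself, something along these lines times a plain TCP connect straight to the gameserver and through a proxy VPS on the backbone. The hostnames here are placeholders and the results will obviously vary by region and provider:

import java.net.InetSocketAddress;
import java.net.Socket;

// Rough comparison of the two paths: time a TCP handshake directly to the
// gameserver and to a proxy VPS on the cloud backbone. Hostnames are placeholders.
public class ConnectTimer {
    static long connectMillis(String host, int port) throws Exception {
        long start = System.nanoTime();
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 5000); // 5 second timeout
        }
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("direct : " + connectMillis("gameserver.example.com", 7777) + " ms");
        System.out.println("proxied: " + connectMillis("proxy.example.com", 7777) + " ms");
    }
}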

 

EDIT:

Here is an example of an AWS service dedicated to this exact concept, called AWS Global Accelerator, with a dedicated section on how it benefits gaming. I personally know that Blizzard is using it for games like WoW and Overwatch.

Edited by Elfocrash
Posted
2 minutes ago, Elfocrash said:

It's not theory at all. It might be new to you but it's actually how things work. Backbone networking is one of the selling features of cloud providers and I use them on a day to day basis. Some basic googling will answer all your questions. You can get anything from a 10% to a 60% reduction depending on the region and the status of the network.

 

Those are actual numbers. Feel free to create a free account on AWS or Azure and test it on your own. Also the project is used in multiple live servers and it has been running in its previous form on a few other servers since 2018. Nothing educational here and nothing to sell (so nothing to market). You can disagree all you want but at the end of the day it doesn't matter because it's just ignorance talking.

 

So I suppose you have already implemented your reverse proxy that proves the 50-60% flat latency reduction on servers on Azure SA, where we can test? If what you are saying is not just theory but also practical for L2 GameServers, then this is a breakthrough.

Posted (edited)
2 minutes ago, xdem said:

 

So I suppose you have already implemented your reverse proxy that proves the 50-60% flat latency reduction on servers on Azure SA, where we can test? If what you are saying is not just theory but also practical for L2 GameServers, then this is a breakthrough.

It's nothing new. You probably missed my edit but:

 

Here is an example of an AWS service dedicated to this exact concept, called AWS Global Accelerator, with a dedicated section on how it benefits gaming. I personally know that Blizzard is using it for games like WoW and Overwatch.

 

Basically every big company that does any sort of networking is using that concept and has been for years in both gaming and other general networking.

Edited by Elfocrash
Posted
3 minutes ago, Elfocrash said:

It's nothing new. You probably missed my edit but:

 

Here is an example of an AWS service dedicated to this exact concept, called AWS Global Accelerator, with a dedicated section on how it benefits gaming. I personally know that Blizzard is using it for games like WoW and Overwatch.

 

Basically every company that does any sort of networking is using that concept and has been for years in both gaming and other general networking.

These services are billed per traffic and do not come cheap, and you also need lots of research and Azure knowledge to implement it, while still not knowing the actual network boost you are going to get. And by your own account, your proxy is not torture-tested in a live production environment with hundreds or even thousands of foreign connections on the proxy. Don't get me wrong, I am just thinking out loud about my concerns regarding this project.

Posted (edited)
9 minutes ago, xdem said:

These services are billed per traffic and do not come cheap, and you also need lots of research and Azure knowledge to implement it, while still not knowing the actual network boost you are going to get. And by your own account, your proxy is not torture-tested in a live production environment with hundreds or even thousands of foreign connections on the proxy. Don't get me wrong, I am just thinking out loud about my concerns regarding this project.

Ignore the service itself. It's just an example of an implementation of the concept. You can implement it using proxies instead of using AWS' Edges (which is basically what AWS is also doing).

Like I said, spin up an Azure or AWS environment (wherever you have free credits) and test it for yourself. I tested it with the L2jBrasil folks when I originally created it, and that's where my results come from.

 

Now about the "torture testing". I define torture test as 1 million concurrent connections with maybe 10m requests per second. Has it been torture tested this way? Nah, but it's perfectly stable with at least 500 L2 concurrent connections without any signs of degradation. Keep in mind that traffic is also segregated. It's perfectly stable and perfectly fine to use based on no compaints from at least 20 servers that I personally know that are using it.

 

And at the end of the day you don't even have to use the service itself. Simply get a VPS and configure it as a proxy. You can still use the Java part. You just lose some features but gain all the benefits of the concept.

Edited by Elfocrash
