Just For Fun

Dec 25, 2023

SSTP VPN & SoftEther Server

It is extremely challenging to play AOE4 online in China due to the current network conditions. On top of that, Microsoft has no policy room to operate game servers in China. Although the proxy solution for my home network lets TCP-based apps work flawlessly, AOE4 relies on UDP for its networking to function properly.

After carefully examining the available solutions, there are three potential options to tackle this issue. The first is using a commercial "game accelerator". The second is using "sstap"1, despite it no longer receiving updates. The third is setting up a VPN. Personally, I intend to go with the third choice and build my own VPN server.

Now the question arises: which VPN protocol should I select? ChatGPT gave me some recommendations: PPTP, L2TP/IPsec, OpenVPN, and SSTP. Among these, SSTP appears to be a suitable option for my specific requirement of a Windows-only environment. During my search, I came across two open-source SSTP VPN servers on GitHub. The first is Python-based2, but it lacks proper support for the native Windows SSTP VPN client. Then I discovered the SoftEtherVPN3 project, which is truly remarkable. If you just want a quick tutorial, there is one on Reddit4. The key step of that tutorial is generating an X.509 certificate that you install on the client system so it can verify the SSL link between your server and the client operating system.
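If you prefer to script that certificate step rather than click through prompts, a self-signed certificate can also be generated with Python's cryptography package. This is only a minimal sketch: vpn.example.com is a placeholder that must match the hostname your SSTP client dials, and SoftEther can of course generate and export such a certificate by itself.

    # Minimal sketch: generate a self-signed X.509 certificate for the SSTP server.
    # "vpn.example.com" is a placeholder; use the hostname your client will dial.
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "vpn.example.com")])
    now = datetime.datetime.utcnow()

    cert = (
        x509.CertificateBuilder()
        .subject_name(name)                      # self-signed: subject == issuer
        .issuer_name(name)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .add_extension(
            x509.SubjectAlternativeName([x509.DNSName("vpn.example.com")]),
            critical=False,
        )
        .sign(key, hashes.SHA256())
    )

    # cert.pem is imported on the client side; key.pem stays on the server.
    with open("cert.pem", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))
    with open("key.pem", "wb") as f:
        f.write(key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.TraditionalOpenSSL,
            serialization.NoEncryption(),
        ))

On the Windows client, the exported cert.pem goes into the trusted root certificate store so the built-in SSTP client accepts the SSL link to the server.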

Originating from Japan, this project has the backing of a Japanese university. Perhaps this is why it possesses a unique and meticulously designed flavor in terms of its interactivity and configuration patterns. Such characteristics are quite uncommon in the present landscape of open-source development.

What a coincidence! As I am currently learning to play new civilizations in AOE4, I happen to be focusing on Japan, a civilization recently added in the latest AOE4 DLC, "The Sultans Ascend". In the game, Japan boasts a plethora of uniquely designed technologies that I genuinely enjoy exploring, much like the way I enjoy exploring the SoftEther project.


  1. https://www.sockscap64.com/sstap-%E4%BA%AB%E5%8F%97%E6%B8%B8%E6%88%8F-%E4%BD%BF%E7%94%A8sstap/ 

  2. https://github.com/sorz/sstp-server 

  3. https://github.com/SoftEtherVPN/SoftEtherVPN 

  4. https://www.reddit.com/r/VPN/comments/o5i05r/setting_up_sstp_vpn_server_on_linux/ 

posted at 20:00  ·   ·  blog  tech

Nov 28, 2023

At the end of 2311

At the end of the month, I feel I need to write down some of this month's achievements. My ai-client app underwent some significant upgrades. For example, the app can now run entirely locally and features local Whisper-server-based ASR for dictating into any other window. After changing the underlying model (from Llama 2 to OpenOrca), the LLM's speed has improved considerably.
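The dictation part is conceptually tiny. The sketch below is a simplified stand-in for the real client: it assumes a whisper.cpp-style server listening on localhost:8080 with an /inference endpoint (the endpoint and field names are assumptions, not a documented contract) and uses pyautogui to type the transcript into whatever window has focus.

    # Simplified dictation sketch: send a recorded WAV to a local Whisper server,
    # then "type" the transcript into the currently focused window.
    # Assumes a whisper.cpp-style server at localhost:8080 exposing /inference.
    import requests
    import pyautogui

    def dictate(wav_path: str) -> None:
        with open(wav_path, "rb") as f:
            resp = requests.post(
                "http://127.0.0.1:8080/inference",
                files={"file": f},
            )
        resp.raise_for_status()
        text = resp.json().get("text", "").strip()
        if text:
            pyautogui.write(text, interval=0.01)  # emulate keystrokes into the focused app

    if __name__ == "__main__":
        dictate("recording.wav")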

I have added some DIY extensions to my Emacs editor. They are my very first Emacs Lisp programs, and the functions meet my expectations. However, after reviewing the code two weeks later, I realized that although it works, it lacks maintainability and basic organization. I have decided to write more Emacs Lisp only after studying some better projects.

Nonetheless, I admit that AI + editor is an irreversible upcoming trend. Summarization, sentence explanation, grammar and spelling checking, and natural AI-based TTS read-aloud will be indispensable features of next-generation mainstream editors. Maybe I should write an article on the Emacs China online forum, so other Emacs users can take advantage of Emacs's high extensibility with a local LLM server before it all goes mainstream.

Another, smaller niche of my geek hobby is "how to better use an E-ink reader in the AI era." Thanks to the dpt-rp1-py1 and pdfannots2 projects, plus my local ai-client and Emacs editor, I could turn this almost non-interactive plain digital-paper device into a much more intelligent reading companion for the laptop. When I find an article that needs focused reading, I copy it into Emacs and use a single DIY Emacs command to sync it to the e-paper device. While reading on the device, I simply mark any unclear sentences or words and run one command on the laptop; the explanation then appears on the device, closely following the marked text.
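Roughly, the laptop-side half of that loop looks like the sketch below. It is simplified: the dptrp1 and pdfannots command-line invocations and the llama.cpp-style /completion endpoint are stand-ins for my actual scripts, and the document paths are placeholders.

    # Simplified sketch of the laptop-side loop: fetch the annotated PDF from the
    # e-paper device, pull out the highlights, and ask the local LLM to explain them.
    # The CLI invocations and the /completion endpoint are stand-ins, not exact commands.
    import subprocess
    import requests

    REMOTE_DOC = "Document/article.pdf"   # placeholder path on the device
    LOCAL_DOC = "/tmp/article.pdf"

    # 1. Download the annotated PDF from the reader (via the dpt-rp1-py CLI).
    subprocess.run(["dptrp1", "download", REMOTE_DOC, LOCAL_DOC], check=True)

    # 2. Extract the marked sentences/words with pdfannots (plain-text output).
    highlights = subprocess.run(
        ["pdfannots", LOCAL_DOC], check=True, capture_output=True, text=True
    ).stdout

    # 3. Ask the local LLM (llama.cpp-style server) to explain each highlight.
    resp = requests.post(
        "http://127.0.0.1:8081/completion",
        json={"prompt": f"Explain the following highlighted passages simply:\n{highlights}",
              "n_predict": 512},
    )
    print(resp.json().get("content", ""))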

All of this geek DIY tinkering gives me a long-absent sense of freedom in front of computers and digital devices. Doing these things, I feel like not just a consumer but also a hacker. That feeling first came over me when I discovered Linux and became passionate about it in high school, 20+ years ago. I'm overjoyed that it has reappeared.


  1. https://github.com/janten/dpt-rp1-py 

  2. https://github.com/0xabu/pdfannots 

posted at 20:00  ·   ·  blog  tech

Nov 23, 2023

All 3 (ASR/LLM/TTS) in Local!

Last night, I was very excited because I managed to set up my TTS server (piper1-based) locally. It took me a whole day to make it work perfectly, using an unmerged patch2 to improve its performance in a CUDA environment. I also plugged it into my custom-built terminal chatbot client and my Emacs editor. To keep the server running as an always-on daemon service, I wrote a systemd service unit file for it.
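For reference, the unit file itself is nothing fancy. The sketch below is only a generic shape with placeholder user, paths, and start command (the real ExecStart launches the piper-based server however you have it wrapped), not a copy-paste recipe.

    # Sketch of the systemd unit, with placeholder user, paths, and command.
    [Unit]
    Description=Local piper-based TTS server (placeholder sketch)
    After=network.target

    [Service]
    Type=simple
    User=tts
    WorkingDirectory=/opt/tts
    # Placeholder: replace with however you launch your piper-based server.
    ExecStart=/opt/tts/run-tts-server.sh
    Restart=on-failure
    RestartSec=3

    [Install]
    WantedBy=multi-user.target

Once installed under /etc/systemd/system/, systemctl enable --now keeps the service running across reboots and restarts it if it crashes.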

After finishing the task, my home lab server reached a state worth noting: ASR, LLM, and TTS are all running locally, with no cloud API or internet access needed. Moreover, all three services are CUDA-accelerated, which means they perform better than any cloud service.

I believe this setup is a crucial piece of infrastructure that is not given enough recognition, which is what prompted me to write about it.

Nowadays, computing power and chips are hot topics; everyone discusses them. Yes, today's new AI/graphics chips are powerful and versatile. However, tech companies only put this power in the cloud, a highly controlled environment. If you aren't a gamer, you likely won't feel the need for a dedicated graphics or AI card.

In the coming years, the situation could change. ChatGPTs are showing people how incredible AI capabilities can be. Most users access AI through its website or mobile app. Although people say it's amazing, this access method places a limitation on its full potential.

In the name of AI safety, I doubt that these leading AI companies are willing to release that full potential to people.

So that leaves an anticipated vacuum.

When technology is unstable, people are less inclined to use it. This is particularly true when natural feedback is expected. When we talk with a person, we naturally expect quick responses, so when talking is the way we use technology, that same expectation comes into play. In contrast, when writing or typing, the tolerance is much higher.

So a locally running, high-performance ASR/LLM/TTS system will be a game changer for the "talking/language/speech as the main UI" concept to succeed.

Another scenario that works as an analogy is cloud gaming. Even when pushed by companies like Google, cloud gaming failed too, because when people play games, the tolerance for delay or pauses is much lower than when searching, shopping, writing, or coding. When gaming, we mobilize our natural instincts, and the same applies when we talk.

The potential of the technology is immense. Currently, I can talk with a robot in speech for language-learning purposes and summon basic Emacs editor commands. However, it's still just a foundation. Since it runs locally, privacy concerns are lessened. Additionally, this allows the LLM to access my local data more easily. The robot (scripts that wrap the local LLM) could access my exercise data, financial information, and daily routines. This creates a truly personalized assistant experience, beyond even the most advanced ChatGPTs in the cloud.

I even think it will become a foundation for further development. ASR and TTS services naturally come with simple APIs. The LLM is a little more complex, but its core input/output layer just does text completion. Therefore, the combination of ASR/LLM/TTS could become a standard supported by multiple vendors.
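To make that concrete, the whole loop can be sketched as three HTTP calls against local services. The endpoint paths and JSON fields below (an /inference ASR route, a llama.cpp-style /completion route, and a /synthesize TTS route) are assumptions standing in for whichever vendor's servers you run; the point is only that the seams between the three stages are narrow.

    # Sketch of the ASR -> LLM -> TTS loop as three calls to local servers.
    # Endpoint paths and JSON fields are assumptions, not a fixed standard.
    import requests

    ASR_URL = "http://127.0.0.1:8080/inference"    # speech -> text
    LLM_URL = "http://127.0.0.1:8081/completion"   # text -> text (completion)
    TTS_URL = "http://127.0.0.1:8082/synthesize"   # text -> speech

    def talk(wav_in: str, wav_out: str) -> str:
        # 1. ASR: what did the user say?
        with open(wav_in, "rb") as f:
            heard = requests.post(ASR_URL, files={"file": f}).json()["text"]

        # 2. LLM: plain text completion is the whole core interface.
        reply = requests.post(
            LLM_URL, json={"prompt": f"User: {heard}\nAssistant:", "n_predict": 256}
        ).json()["content"]

        # 3. TTS: speak the reply back.
        audio = requests.post(TTS_URL, json={"text": reply}).content
        with open(wav_out, "wb") as f:
            f.write(audio)
        return reply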

Latest follow-up:

After my feedback, the fix for the extremely long inference time when using CUDA has been merged.


  1. https://github.com/rhasspy/piper 

  2. https://github.com/rhasspy/piper/pull/172 

posted at 20:00  ·   ·  blog  tech

Nov 14, 2023

Start a Blog site again

First, we need to choose a static blog generator. There are two options: 1. weblorg 2. Pelican.

weblorg

Weblorg is a beautifully written Emacs Lisp project. Its built-in file format is Org-Mode, a first-class format in the Emacs world that hasn't been conquered by Markdown. If Org-Mode is my primary writing format, then Weblorg would undoubtedly be my choice.

Pelican

Pelican is a modern Python static site generator. In the vibrant world of Python, there are over a dozen static site generators available. This project has thrived since its inception and continues to evolve, showcasing a clear and mature design philosophy1. It supports both Markdown and reStructuredText, a popular content writing format that predates the widespread adoption of Markdown.
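Getting started with Pelican is mostly a matter of filling in pelicanconf.py. The sketch below shows the kind of minimal configuration involved; the values are placeholders, not this site's actual settings.

    # pelicanconf.py - minimal sketch with placeholder values.
    AUTHOR = "Your Name"
    SITENAME = "Just For Fun"
    SITEURL = ""            # leave empty for local preview

    PATH = "content"        # Markdown/reST sources live here
    TIMEZONE = "Asia/Shanghai"
    DEFAULT_LANG = "en"

    DEFAULT_PAGINATION = 10

After that, pelican content builds the site and pelican --listen serves it locally for preview.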

Conclusion

I've decided to start with Pelican due to its built-in Markdown support. If the blog continues to grow and I gain more proficiency in Emacs Lisp, migrating to Weblorg might make the site a bit sexier.


  1. https://docs.getpelican.com/en/latest/internals.html 

posted at 21:30  ·   ·  blog  web