
This repo doesn't have enough stars #3

Open · jeff-hykin opened this issue Jan 17, 2020 · 4 comments

jeff-hykin commented Jan 17, 2020

I'd say this project needs about 1,000 to 10,000 more GitHub stars.

This is an absolutely fantastic project, unlike anything I've seen in web dev 👏👏👏, a true demonstration of the untapped power of the web. Not only that, but the writeup is great: the gifs, the blog post, and the explanation of what you had to do and who you had to talk to in order to get it working. I can't believe how responsive the terminal is WITHOUT wasm. We're going to use this at my university to teach Linux to students (who mostly have Windows laptops).

In terms of a browser file system, over the years I've been incredibly interested in a client-side package manager. Instead of compiling S.P.A. websites into one giant monolithic file, the site would just request the package+version of the npm libraries it needs. The client would either already have a local copy and load it instantly, or download it one time (instead of downloading an entire duplicate library for every site and every site reload). You could potentially even pre-load library variables while waiting on the code that's unique to the website. This could make so many sites blazing fast even on low bandwidth. I've always wondered why Google hasn't built something like this into Chrome.
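
To make the idea concrete, here's a rough sketch of the caching half, assuming packages are served from immutable package@version URLs (the URL pattern and cache name are made up for illustration):

```ts
// sw.ts — hypothetical service worker sketch (not part of this project).
// Serve immutable package@version URLs from Cache Storage, so each library
// version is downloaded at most once and loads instantly afterwards.
declare const self: ServiceWorkerGlobalScope;

const PKG_CACHE = 'pkg-cache-v1';
// Matches made-up URLs like /pkg/lodash@4.17.21/lodash.min.js
const PKG_RE = /^\/pkg\/[^/]+@[^/]+\//;

self.addEventListener('fetch', (event) => {
  if (!PKG_RE.test(new URL(event.request.url).pathname)) return;

  event.respondWith(
    caches.open(PKG_CACHE).then(async (cache) => {
      const hit = await cache.match(event.request);
      if (hit) return hit;                      // already local: instant load
      const res = await fetch(event.request);   // one-time download
      if (res.ok) await cache.put(event.request, res.clone());
      return res;
    }),
  );
});
```

(One likely answer to the Chrome question: browsers now partition caches by top-level site for privacy, which is exactly what makes a shared cross-site library cache like this hard to ship.)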

I'm also interested in how much work it would take to connect the Linux VM to the network through the JavaScript fetch API so that commands like wget would work. That would be an incredible step forward, but I imagine it isn't easy, since it probably doesn't fit within Plan 9.

[Obviously you can close this issue]

humphd (Owner) commented Jan 17, 2020

@jeff-hykin thanks so much for filing this issue. Obviously I'm not going to close it :) It's great to read comments like this, and I'm glad you found a use for it. I built it because I was sure it was possible to make it work, but hadn't seen it done. However, I didn't have a use case as such. I also teach CS to undergrads, so if it gets used to help teach Linux, that's wonderful.

Regarding networking, it's possible to do this, and the underlying "hardware" I used (v86) has docs on how to make it work; see https://github.com/copy/v86/blob/master/docs/networking.md. It uses a WebSocket and a proxy server to allow connections out onto the network. I didn't do this because I wanted to run everything statically, without any server. If you did want to go this route, the VM I built would need to be reconfigured and rebuilt to support the Linux networking stack; I stripped it out to save size.
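
In case it helps, the shape of that setup (going by the v86 doc above) is roughly the following; the relay URL and file paths are placeholders, and as noted, the kernel image would need networking re-enabled:

```ts
// Hypothetical boot script, assuming the WebSocket-relay approach from
// v86's docs/networking.md. V86Starter is v86's global constructor.
declare const V86Starter: new (opts: Record<string, unknown>) => unknown;

const emulator = new V86Starter({
  wasm_path: 'v86.wasm',
  memory_size: 64 * 1024 * 1024,
  bzimage: { url: 'build/bzImage' },   // kernel rebuilt WITH networking
  // Ethernet frames from the emulated NIC are relayed over this WebSocket
  // to a proxy server, which is what actually talks to the real network.
  network_relay_url: 'wss://relay.example.com/',
  autostart: true,
});
```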

Let me know how it goes doing things with this. I'd be interested to hear more as you play with it.

@martin12333

(By the way, I created 2 related subreddits: …)

@dawnofman

> I'm also interested in how much work it would take to connect the Linux VM to the network through the JavaScript fetch API so that commands like wget would work.

Did you really find a way to implement networking?


SahidMiller commented Aug 13, 2021

> In terms of a browser file system, over the years I've been incredibly interested in a client-side package manager.

@jeff-hykin You might be interested in VirtualFS for getting Yarn and Webpack working in the browser. VirtualFS/Filer can hold the files temporarily in memory or in a db, enough to build a package.

I recently got a simple test working: building a test file with Webpack and downloading a package with Yarn. It only took a one-line change in each of those two packages, and none in VirtualFS, IIRC.
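
The basic shape, going by Filer's documented node-fs-compatible API (the file contents below are just an illustration):

```ts
// A small sketch using Filer's fs-compatible API with its in-memory provider.
// Point a bundler's fs calls at this and node_modules never touches disk.
const Filer = require('filer');

const fs = new Filer.FileSystem({
  provider: new Filer.FileSystem.providers.Memory(),
});

fs.writeFile('/package.json', JSON.stringify({ name: 'demo' }), (err: Error | null) => {
  if (err) throw err;
  fs.readFile('/package.json', 'utf8', (err2: Error | null, data: string) => {
    if (err2) throw err2;
    console.log(data); // {"name":"demo"}
  });
});
```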

Trying this out and succeeding is actually what led me to this project, God bless, so I'm glad you brought client-side package management up!

> Instead of compiling S.P.A. websites into one giant monolithic file, the site would just request the package+version of the npm libraries it needs. The client would either already have a local copy and load it instantly, or download it one time (instead of downloading an entire duplicate library for every site and every site reload).

Coincidentally, I used go-ipfs on the backend and js-ipfs on the client side to get around CORS and download packages. The meat of the feature, which was used to fetch the packages over js-libp2p, is explained here.

These libraries are actually perfect for deduplication and distribution of packages AFTER downloading them from the official site... using the same networking tools for both!

I'm curious about mounting IPFS into the Linux VM, or, even better, having functional tools to compose multiple arbitrary fs-like objects.

(BTW, if you're familiar with IPFS: by polyfilling the 'fs' module, you can use the CLI version in the browser with xterm.js, which is very convenient in general, and specifically for working with this browser Linux.)

> You could potentially even pre-load library variables while waiting on the code that's unique to the website. This could make so many sites blazing fast even on low bandwidth. I've always wondered why Google hasn't built something like this into Chrome.

Webpack Module Federation might be a great choice for building SPAs that run on low bandwidth, and it works great with IPFS (even the creator of Module Federation mentioned this).

It basically combines lazy-loading modules with the power of Webpack's dependency negotiation. So if the "container" or "calling app" already has certain modules that are compatible with what the import needs, they won't be refetched... and if they clash, they'll be isolated from each other!

If you look at the examples, you'll see it's normally combined with routing, so each URL is kind of like a microfrontend with its own packages, a "container". Then when the user navigates to another page, this container might have some compatible packages and might not, so it merges with the child app. This goes on as long as the user keeps navigating to different pages, and it doesn't matter which URL or "container" they started with.

At least that's my understanding of it.
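
To make that concrete, a minimal sketch of such a config (all names, versions, and URLs here are hypothetical):

```ts
// webpack.config.ts — a minimal Module Federation sketch (hypothetical names).
// The host shares react; a remote loaded at runtime reuses the host's copy
// when the version range is compatible, instead of refetching its own.
import { container } from 'webpack';

export default {
  plugins: [
    new container.ModuleFederationPlugin({
      name: 'host',
      remotes: {
        // fetched lazily, only when the user navigates to that "page"
        checkout: 'checkout@https://cdn.example.com/checkout/remoteEntry.js',
      },
      shared: {
        react: { singleton: true, requiredVersion: '^17.0.2' },
        'react-dom': { singleton: true, requiredVersion: '^17.0.2' },
      },
    }),
  ],
};
```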

I'm curious if we can get this plugin working easily in the browser "version" of webpack!

> I'm also interested in how much work it would take to connect the Linux VM to the network through the JavaScript fetch API so that commands like wget would work. That would be an incredible step forward, but I imagine it isn't easy, since it probably doesn't fit within Plan 9.

The library I used to get Yarn to download packages via js-ipfs is called network-stackify. It wraps js-libp2p into a net-like socket and polyfills http, https, and tls using that libp2p connection. I've successfully used http, https, and even WebSockets on top of libp2p in the browser, polyfilling arbitrary applications that assume those native modules are there. Even full servers like hapi, God bless! (I literally hosted an IPFS Gateway in the browser using this.)
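
(A generic sketch of the underlying trick, not network-stackify's actual API: Node's http module accepts a createConnection option, so any Duplex stream can carry an HTTP exchange. In the sketch below a plain TCP socket stands in for the wrapped libp2p stream:)

```ts
// Sketch: drive node's http client over an arbitrary Duplex stream.
import http from 'http';
import net from 'net';
import type { Duplex } from 'stream';

// Stand-in transport. In the setup described above, this would return a
// Duplex wrapping a js-libp2p stream instead of a real TCP socket.
function openTransport(): Duplex {
  return net.connect(80, 'example.com');
}

const req = http.request(
  {
    host: 'example.com',
    path: '/',
    // http writes the request into whatever stream we return and parses the
    // response from it; it never needs to know the socket isn't "real" TCP.
    createConnection: openTransport,
  },
  (res) => {
    console.log('status:', res.statusCode);
    res.resume();
  },
);
req.end();
```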

I actually think most of that work might be extremely useful for this project, since we could use SSH OUTSIDE of Linux via SSH2 and use that one connection to multiplex all the internal Linux network calls. So: linux -> ssh2 via libp2p -> server -> internet. Perhaps one day each libp2p peer can be its own network interface?

I think the only thing that prevented this before was not having a decent http polyfill over websockets.
