Add Phoenix examples readme to docs #404

Merged 2 commits on Oct 30, 2024
6 changes: 3 additions & 3 deletions examples/phoenix/README.md
@@ -1,6 +1,6 @@
# Phoenix LiveView examples

-This directory contains minimal, single-file LiveView applications, which showcase how to integrate Bumblebee as part of the application.
+The [examples/phoenix](https://github.com/elixir-nx/bumblebee/tree/main/examples/phoenix) directory contains minimal, single-file LiveView applications, which showcase how to integrate Bumblebee as part of the application.

## Running

@@ -89,12 +89,12 @@ When working with user-given images, the most trivial approach would be to just

Both of these downsides can be avoided by moving all the work to the client. Specifically, when the user selects an image, we can resize it to a much smaller version and decode to pixel values right away. Both of these steps are fairly straightforward using the Canvas API.

-For an example implementation of this technique see the [image classification example](image_classification.exs).
+For an example implementation of this technique see the [image classification example](https://github.com/elixir-nx/bumblebee/tree/main/examples/phoenix/image_classification.exs).
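
For illustration only (not taken from the example above), here is a minimal sketch of this client-side preprocessing with the Canvas API; the 224×224 target size and the function name are assumptions:

```typescript
// Minimal sketch: shrink a user-selected image on the client and read raw
// pixel values with the Canvas API, so only a small payload reaches the server.
// The 224x224 target size is an assumption for illustration.
const TARGET_SIZE = 224;

async function imageToPixels(file: File): Promise<Uint8ClampedArray> {
  // Decode the selected file into a bitmap the browser can draw.
  const bitmap = await createImageBitmap(file);

  // Draw the bitmap scaled down onto a small canvas.
  const canvas = document.createElement("canvas");
  canvas.width = TARGET_SIZE;
  canvas.height = TARGET_SIZE;
  const ctx = canvas.getContext("2d")!;
  ctx.drawImage(bitmap, 0, 0, TARGET_SIZE, TARGET_SIZE);

  // Read back raw RGBA pixel values to send to the server,
  // which can drop the alpha channel and build a tensor.
  return ctx.getImageData(0, 0, TARGET_SIZE, TARGET_SIZE).data;
}
```

In a LiveView application a function like this could run inside a JS hook, pushing the resulting pixel values to the server.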

### User audio

The points made about images above are relevant to user-given audio as well. In fact, decoding audio files on the server requires ffmpeg to be installed system-wide. However, we can do all preprocessing on the client and send raw PCM data with a single channel to the server.

-For an example implementation of this technique see the [speech-to-text example](speech_to_text.exs).
+For an example implementation of this technique see the [speech-to-text example](https://github.com/elixir-nx/bumblebee/tree/main/examples/phoenix/speech_to_text.exs).
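
For illustration only, here is a minimal sketch of the client-side audio preprocessing with the Web Audio API; the 16 kHz sampling rate and the function name are assumptions:

```typescript
// Minimal sketch: decode a user-selected audio file on the client and obtain
// single-channel PCM at the desired sampling rate.
// The 16 kHz sampling rate is an assumption for illustration.
const SAMPLING_RATE = 16_000;

async function audioToPcm(file: File): Promise<Float32Array> {
  const encoded = await file.arrayBuffer();

  // decodeAudioData resamples the decoded audio to the context's sampling rate.
  const context = new AudioContext({ sampleRate: SAMPLING_RATE });
  const audioBuffer = await context.decodeAudioData(encoded);

  // Take the first channel; stereo input could also be averaged into one channel.
  return audioBuffer.getChannelData(0);
}
```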

If you are interested in real-time streaming, look at the [Membrane Framework](https://github.com/membraneframework/membrane_core).
3 changes: 2 additions & 1 deletion mix.exs
@@ -63,7 +63,8 @@ defmodule Bumblebee.MixProject do
"notebooks/stable_diffusion.livemd",
"notebooks/llms.livemd",
"notebooks/llms_rag.livemd",
"notebooks/fine_tuning.livemd"
"notebooks/fine_tuning.livemd",
"examples/phoenix/README.md"
Contributor: I believe new ExDoc versions (or main) support redirects, so we could have a link that points directly to the GitHub repo. Either way is fine by me, your call.

Member Author: I think a redirect would be ideal, but I couldn't find such an option.

],
extra_section: "GUIDES",
groups_for_modules: [