
Pikelet driver/loader API #175

Open
brendanzab opened this issue Nov 5, 2018 · 12 comments
brendanzab commented Nov 5, 2018

My brain is running round in circles trying to design this in a vacuum, so I thought I'd sketch out some high level thoughts on this stuff. There are a bunch of interlocking concerns, which makes it a little hard to figure out how to make any headway on it.

Currently our loader/driver API lives in the pikelet-driver crate, but it leaves a lot to be desired. Ultimately we want a Rust API that maintains some incrementally accumulated state, with functions that give a nice way to:

  • parse source code
  • type check ASTs
  • evaluate expressions
  • compile stuff
  • load primitive functions
  • query the current state
    • type at cursor position
    • jump to definition
    • complete at cursor
    • find all references
    • find all implementations
  • editor actions
    • rename symbol
    • autoformat
    • case split
    • move hole into binding
    • search for hole substitutions
    • inline definition
    • extract definition
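To make the shape of this more concrete, here's a minimal sketch of what such an incremental driver might look like. All of the names here (Driver, load_file, define, type_of) are invented for illustration, and "types" are just strings; this is not the actual pikelet-driver API.

```rust
// Hypothetical sketch of an incremental driver; not the real pikelet-driver API.
use std::collections::HashMap;

/// Accumulated session state for the driver.
pub struct Driver {
    /// Source text of each loaded file, keyed by file name.
    files: HashMap<String, String>,
    /// Types of checked definitions (types are plain strings in this sketch).
    definitions: HashMap<String, String>,
}

impl Driver {
    pub fn new() -> Driver {
        Driver {
            files: HashMap::new(),
            definitions: HashMap::new(),
        }
    }

    /// "Parse source code": here we just remember the source text.
    pub fn load_file(&mut self, name: &str, src: &str) {
        self.files.insert(name.to_string(), src.to_string());
    }

    /// "Type check ASTs": record that `name` has type `ty`.
    pub fn define(&mut self, name: &str, ty: &str) {
        self.definitions.insert(name.to_string(), ty.to_string());
    }

    /// "Query the current state": the type of a definition, if known.
    pub fn type_of(&self, name: &str) -> Option<&str> {
        self.definitions.get(name).map(|ty| ty.as_str())
    }
}

fn main() {
    let mut driver = Driver::new();
    driver.load_file("prelude.pi", "id = \\x => x");
    driver.define("id", "{A : Type} -> A -> A");
    println!("{}", driver.type_of("id").unwrap());
}
```

The cursor-based queries and editor actions would hang off the same accumulated state; the point of the sketch is only that the driver owns a long-lived session that the clients below poke at incrementally.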

The Pikelet loader API would probably be consumed by the following clients:

Import paths may be:

  • relative to the current file
  • global
    • from a built-in, eg. primitive functions
    • from a package dependency, or the standard library
    • from a dynamically loaded/compiled script

Paths need to be followed in topological order, forming a DAG. We will want to be able to listen to the file system for updates, and incrementally update as needed.
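As a sketch of the "topological order" requirement: a depth-first post-order walk over the import graph yields an order in which every module appears after everything it imports, and catches cycles along the way. The names and string-keyed graph here are illustrative only.

```rust
// Sketch: resolve imports in topological order via DFS post-order.
use std::collections::HashMap;

fn load_order<'a>(
    imports: &HashMap<&'a str, Vec<&'a str>>,
    root: &'a str,
) -> Result<Vec<String>, String> {
    fn visit<'a>(
        imports: &HashMap<&'a str, Vec<&'a str>>,
        module: &'a str,
        visiting: &mut Vec<&'a str>,
        order: &mut Vec<String>,
    ) -> Result<(), String> {
        if order.iter().any(|m| m.as_str() == module) {
            return Ok(()); // already loaded
        }
        if visiting.contains(&module) {
            return Err(format!("import cycle through `{}`", module));
        }
        visiting.push(module);
        // Load everything this module imports first...
        for &dep in imports.get(module).into_iter().flatten() {
            visit(imports, dep, visiting, order)?;
        }
        visiting.pop();
        // ...then the module itself (post-order = topological order).
        order.push(module.to_string());
        Ok(())
    }

    let mut order = Vec::new();
    visit(imports, root, &mut Vec::new(), &mut order)?;
    Ok(order)
}

fn main() {
    let mut imports = HashMap::new();
    imports.insert("main", vec!["data", "prim"]);
    imports.insert("data", vec!["prim"]);
    imports.insert("prim", vec![]);
    println!("{:?}", load_order(&imports, "main").unwrap());
}
```

For incremental updates, a file-system event would invalidate a module and everything downstream of it in this DAG, then re-run the affected suffix of the order.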

We probably want to avoid baking in a heavy compiler back-end (like LLVM) at this level, although I also wouldn't rule out including a JIT (like CraneLift) for evaluating expressions at compile time.

@brendanzab commented:
Might be interesting to look at the Amethyst scripting API proposal for another perspective on what a client embedding Pikelet might want: amethyst/rfcs#1

@brendanzab commented:
Datafrog has a nice approach to evaluation, in that it leaves stepping the runtime up to the client of the library. This could be handy for embedding.

@brendanzab commented:
Also kind of interesting to have a look at the Lua C API for inspiration... 🤔

PaulBone commented Nov 9, 2018

So I haven't read up on this stuff all that much, but the kind of thing I'd imagine for such a loader API / language server interface is much simpler:

Assert new info:

  • Import a pre-existing library (e.g. load a .so/.dll/bytecode)
  • Know this declaration (like an old C compiler seeing a forward declaration)
  • Load/compile this definition (give it some code and maybe some other meta-info, like the filename & line number of the code, and ask it to compile)
  • 'Forget' this symbol: it forgets the name → declaration/definition mappings, but a GC may cause the declaration/definition to hang around. You can now load a new definition (maybe also a declaration).

A definition can be:

  • A type
  • A function
  • etc.

Query info:

  • Tell me about a symbol's type
  • Execute a function with some arguments
  • anything else you can imagine

You can think of a tool supporting this API as calling parts of the compiler as library calls. The compiler itself may have a different interface (so it can do more optimisations, or compile multiple definitions at once). Things like "typecheck this" or "compile this" are covered by the know-definition call. Things such as rename-symbol and autoformat are not actually handled here; they're handled by whatever is making these calls, such as an editor. E.g. rename-symbol might do a forget and then a know. Editors still work with text files, which are the canonical version of the program; this online version just exists to support editor functions or a REPL, and can be thrown away at any time.

Maybe you could add a 'reload definition' operation and handle that by also re-compiling anything that transitively referred to that symbol.
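A rough sketch of the know/forget/reload surface described above, with enough dependency tracking that reloading a definition dirties everything that transitively referred to it. All names are invented, and compiling itself is elided entirely:

```rust
// Sketch of an assert/query session; names are illustrative, not a real API.
use std::collections::{HashMap, HashSet};

#[derive(Default)]
struct Session {
    /// name -> source of the current definition
    definitions: HashMap<String, String>,
    /// name -> names of definitions that refer to it
    dependents: HashMap<String, HashSet<String>>,
    /// names whose compiled artifacts are out of date
    dirty: HashSet<String>,
}

impl Session {
    /// "Know this definition": store it and mark it for (re)compilation.
    fn know(&mut self, name: &str, source: &str, refers_to: &[&str]) {
        self.definitions.insert(name.to_string(), source.to_string());
        for &dep in refers_to {
            self.dependents
                .entry(dep.to_string())
                .or_default()
                .insert(name.to_string());
        }
        self.dirty.insert(name.to_string());
    }

    /// "Forget this symbol": drop the name mapping.
    fn forget(&mut self, name: &str) {
        self.definitions.remove(name);
    }

    /// "Reload definition": also dirty everything that transitively
    /// referred to this symbol.
    fn reload(&mut self, name: &str, source: &str, refers_to: &[&str]) {
        self.know(name, source, refers_to);
        let mut stack = vec![name.to_string()];
        while let Some(n) = stack.pop() {
            if let Some(users) = self.dependents.get(&n) {
                for user in users.clone() {
                    if self.dirty.insert(user.clone()) {
                        stack.push(user);
                    }
                }
            }
        }
    }
}

fn main() {
    let mut session = Session::default();
    session.know("succ", "\\n => n + 1", &[]);
    session.know("double-succ", "\\n => succ (succ n)", &["succ"]);
    session.dirty.clear(); // pretend everything compiled
    session.reload("succ", "\\n => n + 2", &[]);
    let mut dirty: Vec<&String> = session.dirty.iter().collect();
    dirty.sort();
    println!("{:?}", dirty); // both definitions need recompiling
}
```

The queries ("tell me about a symbol's type", "execute a function") would then be methods that refuse to answer, or recompile on demand, while a name is in the dirty set.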

@brendanzab commented:
Thanks for sharing your thoughts @PaulBone! Some nice food for thought!

brendanzab commented Nov 10, 2018

jonathandturner/rhai looks like it has a nice API for this stuff!

One nice thing is how you can register functions:

extern crate rhai;
use rhai::{Engine, RegisterFn};

fn add(x: i64, y: i64) -> i64 {
    x + y
}

fn main() {
    let mut engine = Engine::new();

    engine.register_fn("add", add);

    if let Ok(result) = engine.eval::<i64>("add(40, 2)") {
       println!("Answer: {}", result);  // prints 42
    }
}

And types as well:

extern crate rhai;
use rhai::{Engine, RegisterFn};

#[derive(Clone)]
struct TestStruct {
    x: i64
}

impl TestStruct {
    fn update(&mut self) {
        self.x += 1000;
    }

    fn new() -> TestStruct {
        TestStruct { x: 1 }
    }
}

fn main() {
    let mut engine = Engine::new();

    engine.register_type::<TestStruct>();

    engine.register_fn("update", TestStruct::update);
    engine.register_fn("new_ts", TestStruct::new);

    if let Ok(result) = engine.eval::<TestStruct>("let x = new_ts(); x.update(); x") {
        println!("result: {}", result.x); // prints 1001
    }
}

This could help reduce the current mess we have in pikelet_elaborate::context.

@brendanzab commented:
Gluon also has nice embedding and marshalling APIs that might be worth drawing inspiration from.

@brendanzab commented:
One issue that comes to mind is how closures might be handled, passing from Rust into Pikelet. Currently we only have an interpreter, so we can actually call closures during normalization. But eventually we'll want to have a JIT, and a code generator, so that might not be possible. This would most likely limit our ability to support things like rhai's Engine::register_fn API.

@brendanzab commented:
Here's a nice list of embeddable languages that we might be able to get inspiration from (thanks @photex!).

I'm kind of feeling that you might have disjoint concerns when embedding with a JIT vs compiling to native code: in the former case you want to register Rust types and data/closures with the VM, while in the latter you might want to statically/dynamically link to a Rust or C library. It's kind of tricky to support both. 🤔

brendanzab commented Nov 15, 2018

Chatted to the peeps on the Cranelift Gitter, and got some nice responses! @sunfishcode says:

For passing closures into JIT'd code, the first version of this will look like: at the machine code level, you pass a pointer to the function in, and call it indirectly, passing in pointers to its data. Pretty low-tech to start with. But there are people working on building Cranelift-based Rust backends, which should open up more options in the future.

They might be able to do some work to make this easier though, which would be neat!
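The low-tech scheme @sunfishcode describes can be sketched in plain Rust: erase a closure into an (entry point, data pointer) pair, and have the "JIT'd code" (here just an ordinary function standing in for emitted machine code) call it indirectly. All names here are invented for illustration.

```rust
// Sketch of the "function pointer + data pointer" calling convention.

/// The shape the JIT'd code expects: an extern "C" entry point that
/// receives an opaque environment pointer along with the arguments.
type Entry = extern "C" fn(env: *mut u8, arg: i64) -> i64;

/// Trampoline: recover the concrete closure type from the opaque pointer.
extern "C" fn call_closure<F: FnMut(i64) -> i64>(env: *mut u8, arg: i64) -> i64 {
    let closure = unsafe { &mut *(env as *mut F) };
    closure(arg)
}

/// Erase a closure to a (code pointer, data pointer) pair. The caller must
/// keep the closure alive for as long as the pair is used.
fn erase<F: FnMut(i64) -> i64>(closure: &mut F) -> (Entry, *mut u8) {
    (call_closure::<F>, closure as *mut F as *mut u8)
}

/// Stand-in for JIT'd machine code: an indirect call through the pair.
fn jit_call_site(entry: Entry, env: *mut u8, arg: i64) -> i64 {
    entry(env, arg)
}

fn main() {
    let offset = 2;
    let mut add_offset = move |x: i64| x + offset;

    let (entry, env) = erase(&mut add_offset);
    println!("{}", jit_call_site(entry, env, 40)); // prints 42
}
```

This is essentially what a `register_fn`-style API would have to lower to once the call site is generated code rather than the interpreter.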

@brendanzab brendanzab mentioned this issue Nov 19, 2018
Marwes commented Dec 3, 2018

Thought I'd point out https://github.com/gluon-lang/gluon/blob/master/src/compiler_pipeline.rs . It is not perfect by any stretch but it has worked out quite well in gluon.

A high-level overview: it defines a trait for each compile step:

MacroExpandable
Renameable
MetadataExtractable
InfixReparseable
...

Each trait takes Self as input and outputs its own type on success (MacroValue, Renamed, ...). Then I add two implementations for each trait: one on the previous step's output (e.g. impl Renameable for MacroValue), and one as a blanket implementation on the previous trait (impl<T> Renameable for T where T: MacroExpandable) which just calls the previous step (yielding a MacroValue in this example) and then calls the current step on that output.

All in all, this makes it quite easy to run only the compile steps up to step X (a formatter may only need to parse, while a language server needs to run up to typechecking but no further). It also makes it possible to inject logic between steps and then continue the compilation without worrying about a step being omitted.
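A minimal sketch of the pattern, with strings standing in for real ASTs. One caveat: Rust's overlap rules make it awkward to have both a direct impl Renameable for MacroValue and the blanket impl in a toy example, so here MacroValue instead gets an identity MacroExpandable impl and the blanket impl alone provides Renameable; gluon's real impls are more involved.

```rust
// Sketch of gluon's trait-per-compile-step pattern, with strings as ASTs.

struct Source(String);
struct MacroValue(String);
struct Renamed(String);

trait MacroExpandable {
    fn expand_macro(self) -> MacroValue;
}

impl MacroExpandable for Source {
    fn expand_macro(self) -> MacroValue {
        // Toy "macro expansion": rewrite a macro call into a literal.
        MacroValue(self.0.replace("forty_two!", "42"))
    }
}

// Already-expanded values pass through unchanged, so the blanket impl
// below covers them too without overlapping impls.
impl MacroExpandable for MacroValue {
    fn expand_macro(self) -> MacroValue {
        self
    }
}

trait Renameable {
    fn rename(self) -> Renamed;
}

// Blanket impl: run the previous step, then this step on its output.
impl<T: MacroExpandable> Renameable for T {
    fn rename(self) -> Renamed {
        let MacroValue(code) = self.expand_macro();
        // Toy "renaming": tag variables with a scope id.
        Renamed(code.replace("x", "x#0"))
    }
}

fn main() {
    // Run the whole pipeline from source...
    println!("{}", Source("x + forty_two!".to_string()).rename().0);
    // ...or stop early, e.g. a formatter that only needs macro expansion.
    println!("{}", Source("forty_two!".to_string()).expand_macro().0);
}
```

Each step's output type doubles as proof that the earlier steps ran, which is what makes "inject logic between steps and continue" safe.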

@brendanzab commented:
Oh nice! I like this! This is cool too:

pub type SalvageResult<T> = Result<T, (Option<T>, Error)>;
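To illustrate the salvage idea: on failure the pipeline can still hand back a partial value (say, a best-effort AST) alongside the error, so tooling like a language server can keep working. The parse function below is invented, with a plain String standing in for a real error type.

```rust
// Sketch of the SalvageResult pattern from gluon's compiler_pipeline.
pub type SalvageResult<T> = Result<T, (Option<T>, String)>;

/// Toy "parser": accepts the input unless it contains `?`, in which case
/// it fails but salvages the text up to the first `?`.
fn parse(src: &str) -> SalvageResult<String> {
    match src.find('?') {
        None => Ok(src.to_string()),
        Some(0) => Err((None, "unexpected `?`".to_string())),
        Some(i) => Err((Some(src[..i].to_string()), "unexpected `?`".to_string())),
    }
}

fn main() {
    // A language server can use the salvaged value even when parsing fails.
    match parse("let x = ?") {
        Ok(ast) => println!("ok: {}", ast),
        Err((Some(partial), err)) => println!("salvaged {:?} ({})", partial, err),
        Err((None, err)) => println!("error: {}", err),
    }
}
```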
