Kei – A small and expressive dependent language (github.com/caotic123)
67 points by caotic on Aug 25, 2019 | 15 comments



An unusual name. Is it a reference to the Kei River in South Africa’s Eastern Cape? Sorry - bit of a random question.


"The core of Key is based on a ....."

So either it was called Key and Kei is a homophone, or vice versa (hence the misspelling), or it's two translations from a different language, or they really like Kei cars?


Or autocorrect interfered


It's one of the readings of the Japanese character 軽, which means light (as in not heavy). That would be my guess, but I can't say for certain.


The same kei (軽, lightweight) as kei-car, maybe: https://en.m.wikipedia.org/wiki/Kei_car


It's Dutch; it means 'stone' or 'rock'. It's often used as in 'kei-goed', meaning very, very good. So in Dutch it's actually a great language name!


It's Kei (romaji), meaning light. I don't remember where I got the name, but I think I was listening to some random music.


Can someone explain to me how the Abstraction rule works?


> One of the most interesting properties of Kei is you can combine statics symbols with rewriting rules to create another logic system, like COC. In λΠ-calculus modulo the conversion of terms is available between β-reduction and Γ-Reduction, this means that a type can be changed through a type relation of a rewriting rule. Of course, if there is a well-typed substitution rule σ(x).

Where is the <1% of the world population who can translate this to human-speak, when you need them?


I tried looking into this, but this publication's abstract [1] is just as (un)helpful:

> The lambda-Pi-calculus Modulo is a variant of the lambda-calculus with dependent types where beta-conversion is extended with user-defined rewrite rules. It is an expressive logical framework and has been used to encode logics and type systems in a shallow way. Basic properties such as subject reduction or uniqueness of types do not hold in general in the lambda-Pi-calculus Modulo. However, they hold if the rewrite system generated by the rewrite rules together with beta-reduction is confluent. [...]

[1] https://arxiv.org/abs/1507.08055


This is standard language that is perfectly readable if you’re familiar with the involved concepts.


Since you are a member of the 1% who understands this, please help us mumbo dumbos.


tl;dr: Looks like they are trying to formally encode the notion of "rewrite rule" within type theory itself.

Also a mumbo dumbo here, but I might be able to say something helpful. This is a bit of a long post, as I am trying to provide some context.

A useful way to think of logic and type systems is simply as rewrite rules on arbitrary strings. In logic we call these strings "theorems" and in type theory we call them "types".

In particular, say we have the logical theorem "a /\ (a -> b)". In classical (propositional) logic this corresponds to "a implies b, and a is true", so we make the (obvious) deduction that "b" is true. In other words, we rewrite "a /\ (a -> b)" with the string "b".
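
To make that concrete, here is a tiny Haskell sketch (my own illustration, not anything from Kei) where formulas are a small data type and modus ponens is a single directed rewrite on them:

    -- Formulas of a toy propositional logic.
    data Prop
      = Var String          -- atomic proposition, e.g. "a"
      | Prop :/\: Prop      -- conjunction
      | Prop :->: Prop      -- implication
      deriving (Eq, Show)

    -- One directed rewrite rule: a term of shape  p /\ (p -> q)
    -- is replaced by q; anything else is left untouched.
    modusPonens :: Prop -> Prop
    modusPonens (p :/\: (p' :->: q)) | p == p' = q
    modusPonens t                              = t

    -- modusPonens (Var "a" :/\: (Var "a" :->: Var "b"))  ==  Var "b"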

Intuitively, we want to say the above rewrite means the same thing as "(a /\ (a -> b)) -> b". However, the former is a rewrite while the latter is just a static string. Thus we are forced to distinguish between the formal "->" character and the rewrite process, despite these naively/intuitively both meaning inference.

A bit of terminology. Classically, we call strings "theorems", and when we rewrite one string into another, we call that a "deduction". So say we have a string "S" and rewrite it as "T"; a common way of notating such a deduction is "S => T". Furthermore, the whole system of deductions is what we call our "metalogic", while the collection of theorems produced is our "formal logic".

Anyway, so we have to distinguish between "->" and "=>", which seems a bit unfortunate. There is this standard tool in classical logic called the Deduction Theorem which adds a rule that given "S => T" we can deduce "S -> T". However, the justification for this uses somewhat sophisticated math and, by necessity, it's an argument in some (meta)metalogic. There are other ways of handling this and metalogics that don't have the Deduction Theorem at all. I have even read that Quantum Logic doesn't have such a rule and that adding it in turns the whole system into classical logic!
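
For reference, the Deduction Theorem is usually written as the following inference rule (standard textbook statement, nothing Kei-specific; Γ is the ambient set of hypotheses):

    % From a meta-level deduction of T from S (the "S => T" above),
    % conclude the object-level formula S -> T.
    \[
      \frac{\Gamma,\, S \;\vdash\; T}{\Gamma \;\vdash\; S \to T}
    \]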

Now, to finally get to the point, Kei looks like it uses a metalogic it's calling "λ-Calculus Modulo Theory" which ports the Deduction Theorem to type theory, or in particular, the dependently typed λ-calculus. They seem to be calling these new rules Γ-reduction. In other words, it attempts to make general rewrite rules, like β-reduction, first-class citizens by adding types that correspond to said rules.

If I'm understanding correctly, one attractive feature of this is that it "closes the loop," so to speak, allowing one to encode the λ-Calculus Modulo Theory within itself, which, by extension, lets us encode any other λ-calculus as well. Ostensibly, this should allow for maximally compact and expressive types, since you don't have to hack simple metalogical concepts into a less powerful type system.
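
To sketch what "user-defined rewrite rules as first-class citizens alongside β-reduction" could look like operationally, here is a toy untyped Haskell model (my own invention, not Kei's implementation; real rules match patterns containing variables, whereas this one only matches closed left-hand sides):

    -- A term language with variables, lambdas, application, and
    -- user-declared (static) symbols.
    data Term
      = V String           -- variable
      | Lam String Term    -- \x. body
      | App Term Term      -- application
      | Sym String         -- declared symbol, e.g. "not", "true"
      deriving (Eq, Show)

    -- Capture-naive substitution; good enough for closed examples.
    subst :: String -> Term -> Term -> Term
    subst x s (V y)     = if x == y then s else V y
    subst x s (Lam y b) = if x == y then Lam y b else Lam y (subst x s b)
    subst x s (App f a) = App (subst x s f) (subst x s a)
    subst _ _ t         = t

    -- A user-defined rewrite rule: left-hand side ==> right-hand side.
    type Rule = (Term, Term)

    -- One evaluation step: try the user rules first ("Gamma-reduction"),
    -- otherwise fall back to ordinary beta-reduction.
    step :: [Rule] -> Term -> Term
    step rules t
      | Just rhs <- lookup t rules = rhs
    step _     (App (Lam x b) a)   = subst x a b
    step rules (App f a)           = App (step rules f) (step rules a)
    step _     t                   = t

    -- With the rule (App (Sym "not") (Sym "true"), Sym "false"),
    -- the term App (Sym "not") (Sym "true") steps to Sym "false".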

I only skimmed their paper, but in general one could use their system to model some badly-behaved (i.e. non-confluent) type theories; however, they seem to have (meta)theorems showing that certain uses of Γ-reduction give you well-behaved ones.


“Rewriting rules” refers to term-rewriting systems[1], which are sets of directed equations. An “undirected” equation is similar to what you see in a grade-school algebra class, e.g. y = ax + b; if you have the right pieces of data, you can solve the equation towards either side of the equals sign. A directed equation can only be solved in one direction. That’s important because it allows you to come up with a system of rules that, when applied recursively to a term, eventually terminates.

A “confluent” term rewriting system just means that the set of equations in that TRS has the property that, if there are ever multiple applicable rules, then no matter which of those rules you choose to apply to a term, you eventually get to the same state (again, when the rules are applied repeatedly to a term).
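
A concrete (if tiny) example, in Haskell rather than anything Kei-specific: each equation below is a directed rewrite rule, usable only left-to-right, and the system is confluent: no matter which redex you pick first, a term like add (S Z) (S Z) always normalizes to S (S Z).

    data Nat = Z | S Nat deriving (Eq, Show)

    -- rule 1:  add Z     n  -->  n
    -- rule 2:  add (S m) n  -->  S (add m n)
    add :: Nat -> Nat -> Nat
    add Z     n = n
    add (S m) n = S (add m n)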

Beta-reduction [2] is one of the core operations you can perform in the lambda calculus. You can think of it as being equivalent to a function call; an alternate analogy: when a C compiler inlines a particular function call during compilation, it’s performing beta-reduction at that call site.
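
For example, in plain Haskell (just to illustrate the analogy, not Kei code):

    -- The redex (\x -> x + 1) 2 beta-reduces to 2 + 1, i.e. 3,
    -- exactly as if the call had been inlined at the call site.
    example :: Int
    example = (\x -> x + 1) 2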

That doesn’t explain everything in the quoted text, but hopefully it helps make it a bit more understandable. If you really want to learn more about this area of CS, take a look at the book Types and Programming Languages (TAPL) [3].

[1] https://en.m.wikipedia.org/wiki/Rewriting

[2] https://en.m.wikipedia.org/wiki/Lambda_calculus#β-reduction

[3] https://www.cis.upenn.edu/~bcpierce/tapl/


I think @Profquail explained everything very well, so I only have to add some information about Kei. Kei's rewriting rules are basically used to transform (well-typed) data into other (well-typed) data (in order to avoid a non-confluent system, Kei allows only transformations of static symbols), and this is what enables Kei to create other logic systems. This matters for two reasons. First, proofs constructed in a different proof assistant can be exported to Kei. Second, it gives the expressiveness to deal with other type inference rules, without the need to learn a new language.

I really need a welcome doc for people who aren't familiar with these kinds of topics.



