Church vs. Curry Types (ericnormand.me)
105 points by todsacerdoti on Dec 18, 2023 | hide | past | favorite | 45 comments


I don’t think this is necessarily a binary distinction. Even in a statically-typed language, so ostensibly with Church types, you can imagine a refined type system being added as type annotations. For example, many statically-typed languages do not support range constraints on their built-in numeric types, and such constraints could be added as type annotations, analogous to type annotations in a dynamically-typed language. A real-life example of this is linters with linter annotations in comments.
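A sketch of that idea in TypeScript, using a "branded" type to layer a range constraint on top of the built-in `number` (the `Percent` type and `asPercent` helper are made-up names for illustration):

```typescript
// A "branded" number type: at runtime it is a plain number, but the
// phantom brand lets the checker track the extra range refinement.
type Percent = number & { readonly __brand: "Percent" };

// The only way to produce a Percent is through this checked
// constructor, so the range constraint is enforced at the boundary.
function asPercent(n: number): Percent {
  if (n < 0 || n > 100) throw new RangeError(`${n} is out of range`);
  return n as Percent;
}

const discount = asPercent(42);   // ok
// const bad: Percent = 150;      // rejected statically: number is not Percent
```

The refinement lives entirely in the annotations; erase them and you're back to the ordinary `number` semantics underneath.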

So instead of a dichotomy it is really a lattice of possible type refinements. And the top of the lattice corresponds to the single-type view of dynamically-typed languages. The concrete language chooses a particular point in the lattice as its universally enforced typing level. (Or sometimes just by-default enforced, if there are escapes like “unsafe”.) But the choice is among all points in the lattice, not just two points.

As a side note, it is etymologically amusing that Haskell doesn’t have Curry types.


> I don’t think this is necessarily a binary distinction. Even in a statically-typed language, so ostensibly with Church types, you can imagine a refined type system being added as type annotations.

Considering the author provides LiquidHaskell as an example, I'd imagine they'd agree with this sentiment.

> As a side node, it is etymologically amusing that Haskell doesn’t have Curry types.

If I understand correctly, LiquidHaskell corrects that :)


I don’t think the article’s characterization of ‘types a la Curry’/extrinsic types is the commonly used understanding in discussions about type theories. While the article claims that extrinsic types are ‘not part of the semantics of the language’, I cannot agree. My experience in the area suggests a better description: extrinsic types are those types which may be ascribed to a syntactic construct in a formal language. In other words, while the types are not present as annotations in the formal syntax of language terms, the fact that every term is of some type is built into the semantics of the formal type theory. One of the best descriptions and formal treatments of Curry-style types is in ‘Functors are Type Refinement Systems’ by Melliès and Zeilberger [1]. That paper is discussed in detail, and the formal definitions and properties of Church vs Curry types are explored, in Zeilberger’s OPLSS lectures on Type Refinement [2].

1 - http://noamz.org/papers/funts.pdf

2 - Zeilberger at OPLSS in 2016, 4 lectures


I can't tell if there's anything here I should be trying to learn. Best I can tell it's choosing obtuse naming for Static and Dynamic typing.

> Summary: Static vs dynamic typing debates often flounder because the debators see from two different perspectives without knowing it. Learning to identify the two perspectives can calm the discussion. The tension between the two perspectives has led to Gradual Typing and other technologies.

I'm not buying this. The best pragmatic answer, for me, is that static typing is better for large projects or across teams. Dynamic typing is good for solo work or early development by a small team. Gradual typing is the best you can get when a product developed with dynamic typing grows to be developed by many.


It's comparing static typing as it exists in Haskell or OCaml to static typing as it exists in C++ or TypeScript. To dynamically type a program in the first group, you would have to manually make a giant type that includes everything and use that everywhere. In the second, you can program dynamically by turning the safety off.


Are you sure you meant C++ in the second group? In Typescript you can program dynamically, perhaps. In C++, you'd have to cast everything brutally, which I don't think most would call programming dynamically, or do what you propose for OCaml.


Casting at all really solidly puts it in that group. The idea that there’s a program and a type system and you tell the type system when it’s diverged from the program doesn’t really work in the other group of languages.


(Complete rewrite) I think this may be true in theory but not a practical way to think about C++, as the language fights you if you try to use it that way.


> I can't tell if there's anything here I should be trying to learn. Best I can tell it's choosing obtuse naming for Static and Dynamic typing.

No not at all, both are static. Church style typing means all the terms are annotated with their types. In practice, languages like Haskell have some type inference, so the compiler figures out some of the annotations for you (or even all of them, in easy cases) instead of making you annotate everything, but inside the compiler every term is marked with a type (whether supplied by you or inferred by the compiler).

In Curry style typing, there are no annotations at all, and the type inference algorithm has to figure everything out itself. In the most general case, this problem is undecidable. Part of the challenge in projects like Liquid Haskell is to automate as much inference as possible without running up against this barrier.


The way the article describes Curry style typing seems like dynamic/optional/gradual types to me.

> Curry-style types is when a system of types is applied that is not part of the semantics of the language. This is what Clojure has in core.typed. The meaning of a Clojure program is not dependent on it passing the type checker---it can be run without it. The type checker simply alerts you to type errors in your code. Note that you could consider your type checker to be your own head, which, as flawed as it may be, is what most Clojure programmers use. The types could be anywhere outside of the language.

I'm not familiar with Clojure/core.typed--how is it different from TypeScript (or JavaScript with the type checker in your head)? It seems that if the type check fails it does so statically, but often it doesn't fail statically but rather at runtime. I can see that that is a distinction, but understanding this difference doesn't offer much practical value.


I agree with your sentiment, and it resonates with some of my opinions on languages (I've been reading about TT and abstract math in general for a while as a hobby).

my opinions so far (subject to change as i read more):

dynamic typing = just one type [0]. Yes, good for testing ideas quickly, for quick gigs I don't care about, and for tiny snippets/scripts; I use JS for that.

gradual typing = a guaranteed mess when evolving large code bases. I suspect the gradual part in practice never happens, and when it does your IDE turns red and your compiler errors explode. I don't recall where I read that, but it made sense to me. I used Groovy to take advantage of optional typing, but I was the only dev on that project.

sound type system + parametric polymorphism = the best thing around, although I prefer to have explicit type annotations.

no formal math object behind a language = the language does not exist: just absurd rules, idioms, and syntactic mess that you have to remember, tied to specific monkey-patched implementations that compensate for incoherent foundations. Sadly unavoidable, as most, if not all, popular languages do not exist.

Yes, I've been reading Harper a lot.

[0] https://existentialtype.wordpress.com/2011/03/19/dynamic-lan...

edit: formatting


The ability to reason about code, to split it into modules that can be developed independently, etc. is not dependent on having a static type system. It is dependent on having a specification. "Types" are just one form of spec, and in many languages quite a limited one.


> "Types" is just one form of, in many languages quite limited, specs.

That's the "Curry types" perspective described in the article, yes. "Church types" are syntax, not specification. In Haskell, for example,

    main = print (5 + "foo")
isn't a program that fails to conform to its spec, it's just not a program. It has no meaning and no behavior, not wrong behavior. You can (but should never) `unsafeCoerce` `5` to `String` or `"foo"` to `Int` and get something broken out - but they'll be different broken things, and neither will in any sense be the compiled form of the `main` function above.


I'm so lost.

(I'm firmly in the static-typing camp, coming from an internal philosophy of "computers are supposed to do things as fast as possible, and not make mistakes".)

> Intrinsic types are great because you are guaranteed to have a mathematically-sound safety net at all times.

Sounds great, even if I take it with the usual grain of salt.

> Extrinsic types are useful because you can apply multiple type systems to your code

If I don't get the mathematically-sound safety net, why would I apply 1 type system to my code, let alone n?


This is actually a rare instance of the grain of salt being unnecessary. Depending on the type system, it is possible for it to guarantee that the only source of error be in your specification and not in your implementation.


> If I don't get the mathematically-sound safety net, why would I apply 1 type system to my code, let alone n?

I don't think it is intended to imply that extrinsic type systems wouldn't be sound.

As for why you'd want several type systems, one benefit I imagine is that different type systems could verify different aspects of your program, instead of forcing you to cram everything into one type system.

AFAIK, just because a type system is sound doesn't mean it can be used practically to prove every interesting property of a program.


This lets different parts of your code use different type systems, so to speak, by picking and choosing which annotations you specify. Another real-life example is the “unsafe” mode some statically-typed languages have, which effectively switches to a laxer type system.


> Another real-life example is the “unsafe” mode some statically-typed languages have, which effectively switches to a laxer type system.

Are you referring to Rust here? Because I believe unsafe there doesn't change type checking or the type system in any way.


“Unsafe” disables borrow checking in Rust, which is part of the type system. C# also has a (different) “unsafe”.


Conceptually speaking they are quite close: both let you write code C-style with raw pointers within unsafe blocks. Similar to Rust, C# also has a subset of Unsafe and Marshal APIs which explicitly indicate their risk for bypassing compiler or language checks when either you know what you are doing or when whatever goal they achieve is not expressible with safe code.


unsafe does not disable the borrow checker; it allows calling functions that could not otherwise be written in code that is OK'd by the borrow checker.


This is a great distinction! With TypeScript getting wider use, I've found myself frustrated by the limitations of bolt-on type systems. Knowing how deep the divide is from languages with fully inferable types gives me more appreciation for how much work went into the current state of inference tools.


I found TypeScript to be _incredible_ once you use a runtime validation library like Zod or io-ts.

TypeScript itself has a great type system, but the culture in TS is to use `any` at the boundaries of your app, e.g. in parsing HTTP requests, local storage, URL routes -- any data outside of the type system. This greatly reduces how much a type system can help you.

Zod, io-ts, and other similar libraries allow you to enforce type constraints at runtime in a very sleek, easy way, and they don't require you to write your types twice, either.

I've found that this makes TypeScript feel almost as safe as languages with much better type systems, e.g. Haskell.
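Since the library specifics vary, here's the rough shape of the pattern sketched by hand, with a hypothetical `parseUser` standing in for a Zod/io-ts schema: one definition yields both the runtime check and the static type.

```typescript
type User = { name: string; age: number };

// One validator at the app boundary: the runtime check and the
// static type come from a single definition, so you never write
// the type twice or fall back to `any`.
function parseUser(raw: unknown): User | null {
  if (typeof raw !== "object" || raw === null) return null;
  const r = raw as Record<string, unknown>;
  if (typeof r.name !== "string" || typeof r.age !== "number") return null;
  return { name: r.name, age: r.age };
}

// Data from HTTP, local storage, or URL routes is untrusted until parsed.
const user = parseUser(JSON.parse('{"name":"Ada","age":36}')); // User
const junk = parseUser(JSON.parse('{"name":42}'));             // null
```

The schema libraries automate exactly this boilerplate and derive the `User` type from the schema itself.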


The Typescript team has been pushing the use of `unknown` instead of `any` for typing data at the boundaries of your app; how much of that has shown up in the ecosystem?

EDIT: I definitely agree with your general point, for the record.


unknown definitely helps, but it doesn't solve the problem. You still need to get the data from unknown -> your type. Most people will resort to casting, which is unsafe. Runtime type libraries let you know _for sure_ that the type is correct.

example: https://tinyurl.com/yc74stm6

I adopted this pattern at work for all of our API calls, local storage, route parameters, etc.. The result was the app was fully typed, and we handled every case of invalid data in a reasonable way. When data is invalid, we can provide a sane default value.
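For example (hand-written guard, hypothetical `Route` type), a user-defined type guard is the checked alternative to a cast: the `raw is Route` predicate only narrows after an actual runtime test.

```typescript
type Route = { id: number };

// A type guard: the `raw is Route` return type lets the checker
// narrow `unknown` to Route, but only after a real runtime check,
// unlike a cast, which asserts it blindly.
function isRoute(raw: unknown): raw is Route {
  return typeof raw === "object" && raw !== null &&
    typeof (raw as { id?: unknown }).id === "number";
}

function handle(raw: unknown): number {
  if (isRoute(raw)) return raw.id; // narrowed: no cast needed
  return -1;                       // sane default for invalid data
}
```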


Definitely agreed that it's not a perfect solution, just that it's better than blindly using `any`. Using type assertions to get around it defeats the point, though, and it is frustrating when people do that.


I don’t think the two pictures are isomorphic. The word “isomorphic” means that it is a bijection that preserves structure from one set to another, but I don’t really see that here beyond a superficial aesthetic similarity.

One thing that is isomorphic is mathematical proofs and well-typed computer programs. Without the “well typed” part, you are left with a proof that may or may not be correct and that may or may not make sense.


It's interesting to note that extrinsic typing is not limited to retrospectively-designed typecheckers. One of the big philosophical differences between Haskell and OCaml is that Haskell permits (encourages) type-directed compilation, while OCaml's type annotations don't affect the program semantics (although they affect whether the program compiles). But both are far away from modern-day retrofitted gradual type systems.


To make this a bit more concrete, imagine you were passing the identity function into another function or context, but then using it with both an Int term and a Bool term. Thus by inference that identity function has the types Int->Int and Bool->Bool. Under Curry, this is OK, but under Church, it is not.

I don't think it has much to do with static vs dynamic, or indeed type labels.
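A rough TypeScript rendering of that situation (names invented): the same identity term can be used at both types only when the parameter carries a polymorphic (rank-2) annotation; with one fixed annotation, one of the uses is rejected.

```typescript
// One identity term; many types fit it.
function id<T>(x: T): T {
  return x;
}

// The parameter is annotated with a polymorphic type, so the same
// argument may be used at number AND boolean inside the body.
function useBoth(f: <T>(x: T) => T): [number, boolean] {
  return [f(42), f(true)];
}

// With a monomorphic annotation instead, the boolean use is a type error:
// function useOnce(f: (x: number) => number) { return f(true); } // rejected

const pair = useBoth(id);
```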


I didn't find this post very illuminating. It needs some actual examples of typed code and how the different interpretations would apply. The linked Wikipedia article actually did have an example, and it helped me understand a bit better, but it was in terms of the lambda calculus, so I'm still not entirely seeing how this applies to static and dynamic type system debate.

Also: (2018)


Somewhat related playing in the space: https://www.youtube.com/watch?v=3U3lV5VPmOU


It's wonderful to see an article about this topic that's grounded in rigorous thought. And that references the long, rich history of folks who have worked hard to articulate the problems and solutions one encounters when programming.

From my experience and training, I'm biased towards the Church-style of thought for representing all programs. My reasoning is simple: ultimately, we write programs to solve a problem. The implementation of the program is a secondary concern to producing a correct solution. An incorrect solution is just about as useful as a program that immediately exits after starting. [1]

Using the type-oriented perspective to programming shifts the paradigm from *how* a computer program works to *what* it is accomplishing. When we're sure we have the "what" correct, we can then perform transformations on our program to optimize its implementation to achieve different second-order properties on the "how." Do we need this to run faster? Do we need this to use less memory? Do we need this to re-use a pool of threads or network connections? Do we need this to use SIMD instructions because that's what our hardware supports? The possibilities for "how" can be seemingly endless: they're always use-case specific.

However, when we're thinking about solutions, we necessarily must be focused on achieving a singular goal: the problem is solved. When we use types to represent our program semantics, we gain an awesome level of problem-solving ability. We can ensure that our program semantics really do solve our problem. We don't need to wait for an actual execution of our program to determine whether or not we've solved it. In a layman's sense of the term, we gain an exponential speed-up in our ability to make correct programs. It makes insanely complex problems solvable. And it makes the simple ones trivial.

We stop wasting our time focusing on re-solving every detail that needs to go into the execution of our program. Instead, we are able to build upon foundations that _we know work_. Could we get to where we are today if we had to write every new program in hardware-specific assembly? Implement every piece of functionality going down to the level of "how will my processor be able to execute this?" Of course not! We've realized this in many cases: when we use compilers, we transform a solution defined in one space to an equivalent solution defined in another; we implement our web app endpoints using the types and semantics of WSGI; we make our complex distributed systems on the back of infrastructure with well-documented components that specify the types of things that they support and what they provide. We're surrounded by examples of successful large systems that rely explicitly on defining types.

Ultimately, it's my strongly held belief that for the software industry to mature to genuine engineering, it's imperative for our collective thought to embrace mathematical rigor. This is not to gatekeep others: everyone can learn. We must believe that we can understand this well enough to teach absolutely _anyone_ who is willing to put in the time and effort to understand.

If we're serious, we need to graduate from the notion that we're only writing programs to we're coding up solutions to our problems. Where we define and reason about the problem at a conceptual, abstract level and then move towards reusing and combining solutions from Computer Science.

--

[1] And it may be _less useful_ than that: consider a program that produces _ever so slightly wrong_ results. Is it feasible to always verify that its output is always correct _each time that it is run_? If one needs to do that work, then is it actually automated? If it's not automated, why put in the effort to program a machine to perform the work?


Documentation of data flow and rigorous testing might be an alternative and superior way to get correct answers, at least in some contexts. Just because a type has been assigned to some data does not mean that is necessarily correct. All input needs to be checked and can have various mistakes, errors, and distortions. Type systems are one method that may help to keep things in check, especially with complex business logic maintained by large and dynamic teams. But if the business logic is well defined and the teams are small then typing can be a digression that takes time and effort without bringing about superior results.


The way I look at it is that "types" are assertions, which tell us whether our assumptions about the program are correct or not. There is no such thing as a "correct program". There is only our correct or incorrect understanding of what the program is doing. Type-declarations help us gain the understanding about that.

Types express data, which allow the assertions to be structured and composite, and express assertions about the real data running through the program when it executes.

So "types" in a sense do not exist as such, we are simply writing "predicates" in some formal language. Whether those predicates are checked at compile-time or run-time is a separate question.

The reason many people assume that 'types' are a thing is that when using a "statically typed" language you are required to enter them always, and therefore they take on more or less existence of their own.

Using a typed programming language you are always writing TWO programs, one that does what you want it to do, and another that observes and verifies the code's adherence to your assumptions about it. Thus types are a secondary utility, they are a harness that limits what your program can do. Types help you to better understand and to gain trust in your understanding of your program.


I agree that documentation is important. There are several methods by which we can communicate program semantics, and I firmly believe it's important to do this. For instance, there can be bugs in how the types are specified that don't align with what one wants. This is where written technical documentation or a specification can be incredibly useful.

I'd like to hone in on the specific example you bring up to push back against the value of strongly and statically typed programs:

> All input needs to be checked and can have various mistakes, errors, and distortions.

This is something that should always be 100% represented in the type system. Input validation is something where the value of typing is probably at its best (in my opinion, at least). Input validation is a function that accepts data of some "raw" type and then produces a value of the "clean" type -- or an error. This is `raw --> (clean | error)`. If you're calling this kind of function, you will always need to include logic for handling both possibilities. Not doing so is a mistake.

Think of a compiler: it accepts many text files (or strings thereof) and produces either a binary program or a list of errors.
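A minimal sketch of that `raw --> (clean | error)` shape in TypeScript (the `validateEmail` function is a hypothetical example): the union result type forces callers to handle both outcomes before the clean value is even reachable.

```typescript
type Clean = { email: string };
type Validated =
  | { ok: true; value: Clean }
  | { ok: false; error: string };

// raw --> (clean | error): the union return type means the caller
// must check `ok` before the clean value becomes accessible.
function validateEmail(raw: string): Validated {
  return raw.includes("@")
    ? { ok: true, value: { email: raw } }
    : { ok: false, error: `not an email: ${raw}` };
}

const result = validateEmail("ada@example.com");
if (result.ok) {
  // only inside this branch does `result.value` exist, by construction
  console.log(result.value.email);
}
```

Not handling the `ok: false` branch isn't just a style issue; the checker makes `value` inaccessible until the check is written.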

> But if the business logic is well defined and the teams are small then typing can be a digression that takes time and effort without bringing about superior results.

Some programming languages have rather useless static type systems, where they will rely on exceptions. Perhaps this is the experience that you're drawing on? I would like to make minimal assumptions here, but I acknowledge that I hear this kind of line of thinking a lot when I talk to folks that have only used statically typed languages like Java. I'll pick on Java for a bit, but there are more languages with weak static typing that this applies to.

Java's type system is a real shame. It's very verbose and it doesn't do much useful work beyond very simple things. A major reason for this is because of the language designer's decision to make extensive use of exceptions. Even checked exceptions are a pain to deal with: they're not regular values, forcing programmers to use completely different syntax to deal with them. These poor ergonomics caused people to reach for unchecked exceptions. (Unchecked == every single exception that can be raised in Python, if that comparison helps.)

Unchecked exceptions may provide runtime safety, but at the expense of making code verbose while also making it not actually handle the problem. It's a concept that needs to be used sparingly. Array index out of bounds? Sure. Your input validation failed? Terrible. Much of programming -- in the large or small -- needs to deal with the possibility of an action failing. It's a mistake to write code that can perform a failing operation without checking to see if it does fail.

> digression that takes time and effort without bringing about superior results.

I'd also like to focus on this part a bit. There are many good programming languages that have type inference. At some point, one always has to write the types of something. It may be obvious, such as e.g. `var x = 10`: we know that `10` is an `int`, so it's trivially clear that `x` is an `int` too. Good type systems are designed to push this idea as far as possible. If you're using a function that requires particular operations on its input, a type checker can see these constraints and infer the right type. Similarly, if the output of a function is used in a particular way, the type checker can see this and make inferences.

There are statically typed programming languages where writing down types is actually fairly rare. They're usually only necessary at a few key points. Or necessary to add specific constraints to ensure that a particular thing is only used in a more specific way.

For example, you may have a language where an operator like `+` is defined for both floating-point and integer addition as well as concatenating strings and lists together. You may write something like `def add(x) = x + x`, which could be inferred to work with anything where `+` is defined. But you may actually want to ensure that it only works for floating-point numbers, so you put `x: float` in the parameter list. You might do this so that your compiler can realize it can implement this `add` function using very fast floating-point hardware instructions.
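In TypeScript terms (standing in for the hypothetical language above, since TS's `+` is similarly overloaded): inference fills in the easy types, and an explicit annotation narrows an overloaded operation to one interpretation.

```typescript
// Inference handles the obvious case: the literal 10 fixes x to number.
const x = 10; // inferred as number, no annotation needed

// `+` works on both numbers and strings, so the parameter annotation
// is what pins `double` to the numeric interpretation.
function double(n: number): number {
  return n + n;
}

// The same body with a string parameter means concatenation instead.
function doubleStr(s: string): string {
  return s + s;
}
```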


Note that types are just one way of embracing mathematical rigor. They are very good at representing simple properties of code and checking them very quickly, but they are very very hard to use to represent more complex properties (say, specifying that a function takes an array and a comparison function and returns the same array in an order defined by the comparison function).

No language in common use today asserts anything beyond the most basic properties of a program using types. Idris and Coq do that, but they are extremely obscure even compared to Haskell or OCaml.

It's also important to note that types are quite rarely used outside formal computer science. For example, even in theoretical physics, types are not commonly used in published proofs (and no, measurement units are not types).


I'm glad you brought up languages that use dependent typing! And sure, static typing is just one tool of many, so it's not going to be the end-all be-all in the quest to answer the eternal question: "does my program work the way I think it does?" More refined, my claim is that it is a super powerful tool that brings substantial benefits, and that it's a mistake to not use this tool.

No tool is perfect. There will always be room for improvement as we use our current crop of tools, observe how they serve our needs and don't serve them, then use that knowledge to build even better tools.

> No language in common use today asserts anything beyond the most basic properties of a program using types. Idris and Coq do that, but they are extremely obscure even compared to Haskell or OCaml.

With enough time, we'll see the influence of dependent typing crop up in more common languages. For instance, there's the Rust crate dfdx, which uses dependent typing to encode the shapes of tensors. [1]

> It's also important to note that types are quite rarely used outside formal computer science.

This isn't quite right. At least, I think what you're saying is that "physics proofs don't set up typing rules, therefore they don't use types." But then again, proofs aren't necessarily programs. Is every physics proof computable? I'm not a physicist, so I'll defer that to an expert. My understanding is that physicists aren't interested in computability when they're communicating their ideas about the universe. For the subset of proofs that are computable, I'll make the argument that there exist types on the mathematical structures and operations they're defining. These types needn't be written down and formalized with typing rules to exist.

[1] https://docs.rs/dfdx/latest/dfdx/


Physics has a simple form of types, namely units. Checking units at the end of solving a problem was emphasized by every physics professor I ever had.

As for dependent types, they're only one of the ways to express mathematical properties of programs. Check out what Ada/SPARK does, for example. A layer of logic on top of a program doesn't have to be embedded in the types (and if your language is imperative, you probably don't want that anyway).


Measurement units are not types. They are conceptually similar, but the way they are used is actually entirely different. You can encode units into types, but it gets ugly fast once you do anything more complex than addition and multiplication, so no one actually does it. You won't find any matrix library that uses types to encode the measurement units of every element of a matrix the way you would in a regular physics computation.

The way measurement units are normally used is essentially as special dimensionless constants that all algebraic operations except +/- work on normally. So, you can multiply measurement units, you can divide them, a measurement unit divided by itself equals 1, you can raise a measurement unit to a power, etc.


I agree that units aren't types. However, I think there's validity to thinking about concepts such as acceleration vs. speed vs. mass etc. as distinct types.

Each of those can be specified with different units from different systems of measurement. Some of those are compatible with one another, given some transformations. E.g. for speed, one can convert X m/s into MPH: different units, but same type.

However, mass is incompatible with acceleration: there's no possible valid transformation from Y grams into m/s². If I were writing a program, I would like the language to prevent me from putting a mass value into a place where I need a velocity. If I were writing a physics "proof," it would similarly be invalid to intermix values from these two distinct concepts.


I agree that types are a decent way of representing measurement units for many common use cases.

But I think the difference between the two is more fundamental. It's true of course that you can't transform a mass into a speed.

However, the interesting thing is that you can multiply 5kg by 10 meters/second and get 50m×kg/s, and this is a meaningful operation (in this case, you get a momentum). You can then divide this by 5s and get 10m×kg/s² which is still meaningful (a change in momentum). Even more, you can now divide this by 5kg/s (a change in mass), and get 2m/s (speed).

The way multiplication and division work with measurement units, you can't easily define them as functions that take the measurement units of their parameters as types, since you'd constantly get new types. E.g. the type of `x = foldl (×) listOfMeterQuantities` would be `meter^len(listOfMeterQuantities)`. You could do this in a dependently-typed language of course, but it would be quite unnatural I think.

Essentially, the way you'd probably represent all this is to have a single type family that has a numeric parameter for every measurement unit you want to be able to represent. It would be something like PhysicalQuantity<X, Y, Z, W> == 1 m^X s^Y kg^Z k^W. + and - would be defined on pairs of PhysicalQuantity<X, Y, Z, W> and PhysicalQuantity<X, Y, Z, W>. × and / would be defined on pairs of PhysicalQuantity<X1, Y1, Z1, W1> and PhysicalQuantity<X2, Y2, Z2, W2> and they would produce PhysicalQuantity<X1+X2, Y1+Y2, Z1+Z2, W1+W2> (or - for division). How usable this would then be (i.e. the type of "1m" would be PhysicalQuantity<1,0,0,0>) is debatable. But even if it would be good enough, I think it is quite apparent that the type representation is nowhere near intuitive or direct.
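A runtime rendering of that sketch in TypeScript (dimension exponents carried as values rather than type parameters, since TS has no type-level integer arithmetic; `Qty` and its fields are invented names), reproducing the 5 kg × 10 m/s worked example:

```typescript
type Dims = { m: number; s: number; kg: number };

class Qty {
  constructor(readonly value: number, readonly dims: Dims) {}

  // × and / add/subtract dimension exponents, freely producing
  // "new types" the way physical computations do.
  times(o: Qty): Qty {
    return new Qty(this.value * o.value, {
      m: this.dims.m + o.dims.m,
      s: this.dims.s + o.dims.s,
      kg: this.dims.kg + o.dims.kg,
    });
  }
  over(o: Qty): Qty {
    return new Qty(this.value / o.value, {
      m: this.dims.m - o.dims.m,
      s: this.dims.s - o.dims.s,
      kg: this.dims.kg - o.dims.kg,
    });
  }
  // + is only defined when the dimensions match exactly.
  plus(o: Qty): Qty {
    if (this.dims.m !== o.dims.m || this.dims.s !== o.dims.s || this.dims.kg !== o.dims.kg)
      throw new Error("dimension mismatch");
    return new Qty(this.value + o.value, this.dims);
  }
}

const mass = new Qty(5, { m: 0, s: 0, kg: 1 });     // 5 kg
const speed = new Qty(10, { m: 1, s: -1, kg: 0 });  // 10 m/s
const momentum = mass.times(speed);                 // 50 m·kg/s
const momentumRate = momentum.over(new Qty(5, { m: 0, s: 1, kg: 0 }));   // 10 m·kg/s²
const speedAgain = momentumRate.over(new Qty(5, { m: 0, s: -1, kg: 1 })); // 2 m/s
```

This is the "single type family with a numeric parameter per unit" design, just with the exponent checks pushed to runtime; encoding the same `PhysicalQuantity<X, Y, Z, W>` arithmetic statically is where it gets unnatural.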


> More refined, my claim is that it is a super powerful tool that brings substantial benefits. And that it's a mistake to not use this tool.

I mostly agree with this. I wouldn't say it's super powerful, but it's a super convenient tool that quickly eliminates a broad class of errors without that much effort or contortions in our programs. I do think that the sweetspot for types is something like OCaml or Rust, and that going even into Haskell style (monads) starts to expend too much effort on the types compared to functionality.

I am hopeful that other techniques will come along and complement types to more easily statically check more complex properties of our code. I've been hearing great things about TLA+ style property-based specifications that I'd like to try at some point.

> For instance, there's the Rust crate dfdx, which uses dependent typing to encode the shapes of tensors. [1]

I don't think the shape of a tensor is a particularly complex property. It's nice that it's enforced, but I still see it in the realm of very simple properties. Rust's borrow checking is actually a better example of a more complex property of a program being statically verified.
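For a sense of how simple the property is, here's a hypothetical runtime version of the check that a crate like dfdx moves to compile time: matrix multiplication of shapes (m, k) and (k, n) yields (m, n), and anything else is rejected. (The function name and shapes are illustrative only.)

```python
# Runtime sketch of the shape rule a statically shape-typed tensor
# library enforces at compile time: (m, k) x (k, n) -> (m, n).
def matmul_shape(a_shape, b_shape):
    (m, k1), (k2, n) = a_shape, b_shape
    if k1 != k2:
        raise TypeError(f"inner dimensions differ: {k1} vs {k2}")
    return (m, n)

result = matmul_shape((2, 3), (3, 4))  # the rule accepts this pairing
```

The entire "specification" is one equality on the inner dimensions, which is why it sits at the simple end of the spectrum compared to something like borrow checking.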

> This isn't quite right. At least, I think what you're saying is that "physics proofs don't set up typing rules therefore they don't use types." But then again, proofs aren't necessarily programs. Is every physics proof computable? I'm not a physicist so I'll defer that to an expert.

Physics is basically all about computation, and very little about proof. The point of physics is usually to compute a property of a physical system based on some model, then perform an experiment and measure the same property. If the measurement agrees with the computation, that's a "proof" that the model is "correct" (in quotes because of course these are not what we normally mean by those words in formal contexts).

And in those computations, physicists don't use types in any way you might see in a computer science paper. Measurement units are somewhat similar at a very high level, but they are used very differently in practice.

I go into a little more detail on this in [0].

[0] https://news.ycombinator.com/item?id=38692762


Thanks for a very thoughtful reply! (Feels like real HN. I appreciate that.)

It seems like we're not in too much disagreement. I also see Rust's borrow checker as a real engineering achievement. They were able to take lessons learned discovering affine type systems and apply them in a way that gives Rust programs some tremendously wonderful properties. Whether it's making sure that ML code doesn't do something invalid or that all of the code is free of memory errors, I still see a never-ending march of progress on investing more into type systems and static analysis.

I appreciate your insights on physics. I understand that quantum physics takes the view (more or less, I believe!) that the universe is akin to a computer, with particles (bits) that behave in specific ways and determine how everything works (computation, if I may continue the metaphor).

I replied to your linked comment. But I think there's perhaps a misunderstanding of what types are. I agree: units aren't types. However, a

> a property of a physical system based on some model

counts as a value that has a type. Perhaps the right understanding is that physics has a _very complex type system_. Perhaps one that no one has invested in trying to make computable?


Yes, I think we agree quite a bit, and for me as well this is exactly the type of conversation I come to HN for.

I replied more in depth on the measurement units post on the specifics about that, so I'll leave it aside here.

> I appreciate your insights on Physics.

About physics though, I would say that when I'm talking about computations I'm not describing some complex or deep idea about the universe acting as a computer. I just mean that most of what you do as a physicist is simply to do computations, much like we would in school. You don't write proofs the way most mathematicians do: you take some functions (say, Newton's laws of motion) and an initial state, and compute the state of the system after some time or at some event. Your result is not a proof that X = Y, it is simply the value Y, which you didn't know beforehand. Ideally, you might write a program that automates this work for you, so that instead of doing a computation by hand, you ask a computer to do it for you.
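A toy illustration of that workflow (purely illustrative; the numbers are made up): take Newton's second law and an initial state, step it forward numerically, and read off a value. The result is a number you didn't know beforehand, not a theorem.

```python
# Compute, don't prove: semi-implicit Euler integration of x'' = -g
# for a body dropped from 100 m, stepped forward for one second.
g = 9.81                      # gravitational acceleration, m/s^2
x, v, dt = 100.0, 0.0, 0.001  # initial height (m), velocity (m/s), time step (s)

for _ in range(1000):         # simulate 1 s in 1000 steps
    v -= g * dt               # update velocity from the force law
    x += v * dt               # update position from the velocity

# x now holds the computed height after 1 s (roughly 95 m); the output
# is a value to compare against a measurement, not a proof of anything.
```

Whether the model was "correct" is then settled by comparing x against the measured height, which is the sense of "proof" described above.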

> But I think there's perhaps a misunderstanding of what types are. I agree: units aren't types. However, a

>> a property of a physical system based on some model

> counts as a value that has a type.

I don't think this is the right way to look at it. Types are a specific concept in computer science/math and they behave in certain ways. They are, for example, isomorphic to propositions in intuitionistic logic (the Curry-Howard correspondence), but that doesn't mean they are the same thing.


(2018)



