
No. Native Python string ops suck in performance. String support is absolutely interesting and will enable abstractions for many NLP and LLM use cases without writing native C extensions.


> Native Python string ops suck in performance.

That’s not true? Python's string implementation is heavily optimized and probably has performance similar to C.


It is absolutely true that there is massive room for performance improvement in Python strings, and that performance is generally subpar due to implementation decisions/restrictions.

Strings are immutable, so there's no efficient truncation, concatenation, or modification of any kind; you're always reallocating.
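A quick sketch of what that means in practice: building a string with repeated `+` reallocates an intermediate string on every step, which is why the idiomatic workaround is to collect pieces and do a single `str.join` at the end:

```python
words = ["neural", "language", "model"]

# Repeated concatenation: each "+" allocates a fresh intermediate string,
# because the existing string can never be extended in place.
slow = ""
for w in words:
    slow = slow + " " + w

# One pass over the pieces, one allocation for the final result.
fast = " ".join(words)

assert slow.strip() == fast
```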

There's no native support for a view of a string, so operations like iterating over windows or ranges either have to allocate or have to throw away the string abstractions entirely.
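For example, sliding-window iteration has to materialize a fresh string object per window, since a slice of a `str` is always a copy, never a view:

```python
text = "abcdefgh"

# Every 3-character window is its own allocation, not a view into `text`.
windows = [text[i:i + 3] for i in range(len(text) - 2)]

assert windows[0] == "abc" and windows[-1] == "fgh"
# Each slice is an independent object, even though the data overlaps:
assert all(w is not text for w in windows)
```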

By nature of how the interpreter stores objects, strings are always going to have an extra level of indirection compared to what you can do in a language like C.

Python strings have multiple potential underlying representations, and thus incur some overhead for managing those representations without exposing the details to user code.
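You can see this in CPython via `sys.getsizeof`: since PEP 393, the per-character storage width (1, 2, or 4 bytes) depends on the widest code point in the string. A small sketch (the exact byte counts are CPython-version-specific, so only the ordering is asserted):

```python
import sys

a = "a" * 1000           # pure ASCII: 1 byte per character
b = "\u0416" * 1000      # BMP, non-Latin-1: 2 bytes per character
c = "\U0001F600" * 1000  # astral code points: 4 bytes per character

# Same length, increasingly wide internal representation.
assert sys.getsizeof(a) < sys.getsizeof(b) < sys.getsizeof(c)
```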


There is a built-in memoryview, but it only works on bytes and other objects supporting the buffer protocol, not on strings.
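For example — the `TypeError` here is the point, since `str` does not implement the buffer protocol:

```python
data = b"hello world"

# Zero-copy view of the first 5 bytes; no new bytes object is allocated
# until we explicitly convert with bytes().
view = memoryview(data)[0:5]
assert bytes(view) == b"hello"

# The same thing on a str fails: str is not a buffer.
try:
    memoryview("hello world")
except TypeError:
    failed = True
assert failed
```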


stringzilla[1] gets 10x the performance on some string operations - maybe they don't suck, but there's definitely room for improvement

[1] - https://github.com/ashvardanian/StringZilla?tab=readme-ov-fi...


For NumPy applications you always have to box a value to get a new Python string. That's quite far from fast.
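A sketch of the boxing: pulling a single element out of a NumPy string array produces a Python-level `np.str_` object (a `str` subclass), i.e. a fresh heap allocation per element rather than a view into the array's buffer:

```python
import numpy as np

arr = np.array(["foo", "bar", "baz"])  # fixed-width unicode dtype, e.g. '<U3'

# Element access boxes the fixed-width array data into a Python string object.
x = arr[0]
assert isinstance(x, str)       # np.str_ subclasses the built-in str
assert type(x) is np.str_       # ...but it is a boxed scalar, not a view
assert x == "foo"
```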


Yeah, operating on strings has historically been a major weak point of NumPy's. I'm looking forward to seeing benchmarks for the new implementation.



