And Objective-C is a credible C competitor, partly because it is a true superset of C, partly because you can get it to any performance level you want (frequently faster than equivalent practical C code [2]), and partly because it was even used in the OS kernel in NeXTStep.
Now obviously it doesn't get you all the way there: as a true superset it inherits all of C's unsafety, and if you were to use just the memory-safe id-subset, you wouldn't be fully competitive.
However, it does show a fairly clear path forward: restrict the C part of Objective-C so that it remains safe, and let all the tricky parts that would otherwise cause unsafety be handled by the id-subset.
That is the approach I am taking with the procedural part of Objective-S[3]: let the procedural part be like Smalltalk, with type-declarations allowing you to optimize that away to something like Pascal or Oberon. Use reference counting to keep references safe, but potentially leaky in the face of cycles. Optional lifetime annotations such as weak can be used to eliminate those leaks and to eliminate reference counting operations. Just like optional type declarations can reduce boxing and dynamic dispatch.
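To make the cycle point concrete, here is the Objective-C/ARC analogue of such a lifetime annotation (a minimal sketch; Objective-S spells it differently): a strong forward reference plus a weak back-reference keeps a doubly-linked structure from leaking under reference counting.

    #import <Foundation/Foundation.h>

    // Reference counting alone would leak this structure: each node retains
    // the next, the next retains it back, and the cycle keeps both alive.
    // Marking the back-reference weak breaks the cycle.
    @interface Node : NSObject
    @property (strong, nonatomic) Node *next;      // owning reference
    @property (weak,   nonatomic) Node *previous;  // non-owning back-reference
    @end

    @implementation Node
    @end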
[1] https://blog.metaobject.com/2014/05/the-spidy-subset-or-avoi...
[2] https://www.amazon.com/gp/product/0321842847/ref=as_li_tl?ie...
[3] https://objective.st
The Codable stuff was this:
https://blog.metaobject.com/2020/04/somewhat-less-lethargic-...
As shown and discussed in the series, @objc is not an issue. Do you have reason to believe that Swift performance in general or Codable in particular has improved since?
I mean, you can run these sorts of tests yourself, and you probably should. It's not that hard.
2. Java tends to be at least around 2x slower than C.
3. For my book[1], I wrote a chapter on Swift and did quite a bit of benchmarking. Just how bad Swift was actually surprised me, and I wasn't expecting it to be particularly good.
4. Case study: https://blog.metaobject.com/2020/04/somewhat-less-lethargic-...
Even the worst Objective-C implementation I could think of (really comically bad) was 3.8x the speed of Swift's compiler-supported super-duper-sophisticated implementation. With a bit of work I took that to around 20x.
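By the way, when I say running these tests yourself is not that hard, I mean something like the throwaway harness below (a sketch: the dictionary workload and the iteration count are placeholders, not the benchmarks from the book or the case study). Compile it with and without optimization and compare.

    #import <Foundation/Foundation.h>

    // Minimal timing harness: time a million dictionary inserts and print the result.
    int main(void)
    {
        @autoreleasepool {
            NSMutableDictionary *dict = [NSMutableDictionary dictionary];
            CFAbsoluteTime start = CFAbsoluteTimeGetCurrent();
            for (int i = 0; i < 1000000; i++) {
                dict[@(i)] = @(i);
            }
            NSLog(@"1M dictionary inserts: %.3fs", CFAbsoluteTimeGetCurrent() - start);
        }
        return 0;
    }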
[1] https://www.amazon.com/gp/product/0321842847/ref=as_li_tl?ie...
For more details, see iOS and macOS Performance Tuning: Cocoa, Cocoa Touch, Objective-C, and Swift, Addison-Wesley
https://www.amazon.com/iOS-macOS-Performance-Tuning-Objectiv...
If you don't want to read a book, here is an example as a series of blog posts: https://blog.metaobject.com/2020/04/somewhat-less-lethargic-...
I'd like to refine the advice given a little bit, an approach I like to call "mature optimization". What you need to do ahead of time is primarily to make sure your code is optimizable, which is largely an architectural affair. If you've done that, you will be able to (a) identify bottlenecks and (b) do something about them when the time comes.
Coming back to the Knuth quote for a second: not only does he go on to stress the importance of optimizing that 3% when found, he also specifies that "We should forget about small efficiencies, say about 97% of the time". He is speaking specifically about micro-optimizations; those are the ones we should delay.
In fact, the entire paper Structured Programming with goto Statements[1] is an ode to optimization in general and micro-optimization in particular. Here is another quote from that same paper:
“The conventional wisdom [..] calls for ignoring efficiency in the small; but I believe this is simply an overreaction [..] In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal; and I believe the same viewpoint should prevail in software engineering."
That said, modern hardware is fast. Really fast. And the problems we try to solve with it tend towards the simple (JSON viewers come to mind). You can typically get away with layering several stupid things on top of each other, and the hardware will still bail you out. So most of the performance work I do for clients is removing 3 of the 6 layers of stupid things and they're good to go. It's rare that I have to go to the metal.
Anyway, if you're interested in this stuff, I've given talks[2] and written a book[3] about it.
[1] http://sbel.wisc.edu/Courses/ME964/Literature/knuthProgrammi...
[2] https://www.youtube.com/watch?v=kHG_zw75SjE&feature=youtu.be
[3] https://www.amazon.com/iOS-macOS-Performance-Tuning-Objectiv...
That's always the answer: "the compiler has gotten better and will get better still". Your claim was that Objective-C has all this "extra work" and indirection, but Swift actually has more places where this applies, and pretends it does not. With Objective-C, what you see is what you get, the performance model is transparent and hackable. With Swift, the performance model is almost completely opaque and not really hackable.
>None of the above is possible in Objective-C, though, because of its type system.
What does the "type system" have to do with any of this? It is trivial to create, for example, extremely fast collections of primitive types with value semantics and without all this machinery. A little extra effort, but better and predictable performance. If you want it more generically, even NeXTStep 2.x had NXStorage, which allowed you to create contiguous collections of arbitrary structs.
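For the primitive-type case, a sketch of what I mean (illustrative class, not NXStorage's actual API): a thin Objective-C facade over a plain C array gives you contiguous, unboxed storage, with message sends only at the edges.

    #import <Foundation/Foundation.h>
    #include <stdlib.h>

    // Contiguous, unboxed storage for doubles behind a small Objective-C facade.
    @interface DoubleArray : NSObject {
        double     *_values;
        NSUInteger  _count;
        NSUInteger  _capacity;
    }
    - (void)addDouble:(double)value;
    - (double)doubleAtIndex:(NSUInteger)index;
    - (NSUInteger)count;
    @end

    @implementation DoubleArray
    - (instancetype)init {
        if ((self = [super init])) {
            _capacity = 16;
            _values   = malloc(_capacity * sizeof(double));
        }
        return self;
    }
    - (void)addDouble:(double)value {
        if (_count == _capacity) {
            _capacity *= 2;
            _values = realloc(_values, _capacity * sizeof(double));
        }
        _values[_count++] = value;              // stored by value, no boxing
    }
    - (double)doubleAtIndex:(NSUInteger)index { return _values[index]; }
    - (NSUInteger)count { return _count; }
    - (void)dealloc { free(_values); }          // ARC: no [super dealloc] call
    @end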
Oh... people seem to forget that Objective-C has structs. And unlike Swift structs they are predictable. Oh, and if you really want to get fancy you can implement poor-man's generics by creating a header with a "type variable" and including that in your .m file with the "type variable" #defined. Not sure I recommend it, but it is possible.
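Here is a sketch of that header trick, with made-up names (STORAGE_TYPE and friends are whatever you pick; this is not from the book). The "template" header is included once per element type, with the type variable #defined by the including file:

    /* Storage_template.h -- deliberately no include guard: include once per type */
    #include <stdlib.h>

    typedef struct {
        STORAGE_TYPE *items;
        size_t        count;
        size_t        capacity;
    } STORAGE_NAME;

    static void STORAGE_APPEND(STORAGE_NAME *s, STORAGE_TYPE value)
    {
        if (s->count == s->capacity) {
            s->capacity = s->capacity ? s->capacity * 2 : 16;
            s->items    = realloc(s->items, s->capacity * sizeof(STORAGE_TYPE));
        }
        s->items[s->count++] = value;   /* structs stored by value, contiguously */
    }

    /* SomeFile.m -- instantiate the "generic" for NSRange */
    #import <Foundation/Foundation.h>

    #define STORAGE_TYPE   NSRange
    #define STORAGE_NAME   RangeStorage
    #define STORAGE_APPEND RangeStorageAppend
    #include "Storage_template.h"
    #undef STORAGE_TYPE
    #undef STORAGE_NAME
    #undef STORAGE_APPEND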
The fact that Foundation removed these helpful kinds of classes like NXStorage and wanted to pretend Objective-C is a pure OOPL was a faulty decision by the library creators, not a limitation of Objective-C. And gutting Foundation in favor of CoreFoundation, which made everything slower still, was also a purely political project.
In general, you seem to be using "Objective-C" in this pure OOPL sense of "Objective-C without the C" (which is kind of weird because that is what Swift is supposed to be, according to the propaganda). Objective-C is a hybrid language consisting of C and a messaging layer on top. You write your components in C and connect them up using dynamic messaging. And even that layer is fairly trivial to optimize with IMP-caching, object-caching and retain/release elision.
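IMP-caching, for example, is a few lines: look the implementation up once, then call it through a plain C function pointer in the hot loop (a sketch; the function and its arguments are just illustrative).

    #import <Foundation/Foundation.h>

    // IMP caching: resolve -addObject: once, outside the loop, and call the
    // implementation directly, bypassing objc_msgSend on every iteration.
    static void fillArray(NSMutableArray *array, NSUInteger n)
    {
        SEL sel = @selector(addObject:);
        void (*addObject)(id, SEL, id) =
            (void (*)(id, SEL, id))[array methodForSelector:sel];

        for (NSUInteger i = 0; i < n; i++) {
            addObject(array, sel, @(i));
        }
    }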
Chapter 9 goes into a lot of detail on Swift: https://www.amazon.com/gp/product/0321842847/
A few Swift issues surprised me, to be honest. For example native Swift dictionaries with primitive types (should be a slam dunk with value types and generics) are significantly slower than NSDictionary from Objective-C, which isn't exactly a high performance dictionary implementation. About 1.8x with optimizations, 3.5x without.
This is another point: the gap between Swift and Objective-C widens a lot with unoptimized code. Sometimes comically so: 10x isn't unusual, and I've seen 100x and 1000x. This of course means that optimized Swift code is a dance on the volcano. Since optimizations aren't guaranteed and there are no diagnostics, your code can turn into a lead balloon at any time.
And of course debug builds in Xcode are compiled with optimization off. That means for some code either (a) the unoptimized build will be unusable or (b) all those optimizations actually don't matter. See "The Death of Optimizing Compilers" by D.J. Bernstein.
Anyway, you asked for some links (without providing any yourself):
https://github.com/helje5/http-c-vs-swift
https://github.com/bignerdranch/Freddy/wiki/JSONParser
"Several seconds to parse 1.5MB JSON files"
https://github.com/owensd/swift-perf
But really, all you need to do is run some real-world code.
You also mention looking at the assembly output of the Swift compiler to tune your program. This alone should be an indication that either (a) you work on the Swift compiler team or (b) you are having to expend a lot more effort on getting your Swift code to perform than you should. Or both.
Since it's a single hybrid language, it's trivial to remove slower features from performance-intensive parts.
See my UIKonf talk https://www.youtube.com/watch?v=kHG_zw75SjE&feature=youtu...
Or my book: https://www.amazon.com/gp/product/0321842847/ref=as_li_tl?ie...
What I should have added is that, invariably, the problem would be in one of the places that they had eliminated by reasoning.
While you obviously need to think about your code, otherwise you can't formulate useful hypotheses, you then must validate those hypotheses. And if you've done any performance work, you will probably know that those hypotheses are also almost invariably wrong. Which is why performance work without measurement is usually either useless or downright counterproductive. Why should it be different for other aspects of code?
Again, forming hypotheses is obviously crucial (I also talk about this in my performance book, iOS and macOS Performance Tuning [1]), but I've also seen a lot of waste in just gathering reams of data without knowing what you're looking for.
That's why I wrote experimentalist, not "data gatherer". An experiment requires a hypothesis.
[1] https://www.amazon.com/gp/product/0321842847/ref=as_li_tl?ie...
Not really. Apple's propaganda says it's fast; Swift can sometimes be almost kinda fast if the stars align just right, but in general it is quite slow. For example, last I checked, Kitura's HTTP parser is written in C. And has to be.
Another one: the various JSON "parsers" that wrap the built-in NSJSONSerialization API add about an order of magnitude of overhead. That is on top of NSJSONSerialization already doing all the actual parsing and conversion to a property list, which isn't exactly the most efficient representation in the first place.
The Big Nerd Ranch guys realized that the only way to get a "high performance" JSON parser is to do it 100% in Swift. They did that and the result is significantly faster than the wrappers. And only 4x slower than NSJSONSerialization, which again isn't exactly a very efficient parsing model (think XML DOM parser).
https://github.com/bignerdranch/Freddy/wiki/JSONParser
I do a more in-depth analysis in my book, "iOS and macOS Performance Tuning"
https://www.amazon.com/gp/product/0321842847/ref=as_li_tl?ie...
EDIT: I forgot the link to Freddy
For example, I wrote "iOS and macOS Performance Tuning: Cocoa, Cocoa Touch, Objective-C, and Swift"[1][2] using LaTeX, and I think it came out rather well (Pearson has some pretty amazing LaTeX compositors that took my rough ramblings and turned them into something beautiful).
Quite a while ago, I also used TeX (not LaTeX, IIRC) as part of the typesetting backend of a database publishing tool for the international ISBN agency, to publish the PID (Publisher's International Directory). This was a challenging project. IIRC, each of the directories (there were several) was >1000 pages of 4-column text in about a 4-point font, without chapter breaks. My colleagues tried FrameMaker first on a subset; they let it run overnight, and by morning it had kernel-panicked the NeXTStation we were running it on. The box had run out of swap.
TeX was great: it just chugged away at around 1-4 pages per second and never missed a beat. The customer was very happy. The most difficult part was getting TeX not to try so hard to get a "good layout", which wasn't possible given the constraints and for these types of entries just made everything look worse.
[1] https://www.pearsonhighered.com/program/Weiher-i-OS-and-mac-...
[2] https://www.amazon.com/gp/product/0321842847/ref=as_li_tl?ie...
"Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified."
And just above the famous quote:
"The conventional wisdom shared by many of today's software engineers calls for ignoring efficiency in the small; but I believe this is simply an overreaction to the abuses they see being practiced by penny-wise-and-pound-foolish programmers, who can't debug or maintain their "optimized" programs. In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal; and I believe the same viewpoint should prevail in software engineering. Of course I wouldn't bother making such optimizations on a one-shot job, but when it's a question of preparing quality programs, I don't want to restrict myself to tools that deny me such efficiencies."
All this from "Structured Programming with Goto Statements"[1], which is an advocacy piece for optimization. And as I've written before, we as an industry typically squander many orders of magnitude of performance. An iPhone has significantly more CPU horsepower than a Cray 1 supercomputer, yet we actually think it's OK that programs have problems when their data-sets increase to over a hundred small entries (notes/tasks/etc.).
Anyway, I write a lot more about this in my upcoming book: "iOS and macOS Performance Tuning: Cocoa, Cocoa Touch, Objective-C, and Swift"[2]
[1] http://sbel.wisc.edu/Courses/ME964/Literature/knuthProgrammi...
[2] https://www.amazon.com/iOS-macOS-Performance-Tuning-Objectiv...
We are in an effective post-Moore's law world, and have been for a couple of years. Yes, we can still put more transistors on the chip, but we are pretty much done with single core performance, at least until some really big breakthrough.
On the other hand, as another poster pointed out, we really don't need all that much more performance, as most of the performance of current chips isn't actually put to good use, but instead squandered[1]. (My 1991 NeXT Cube with a 25 MHz '040 was pretty much as good for word processing as anything I can get now, and you could easily go back further).
Most of the things that go into squandering CPU don't parallelize well, so removing the bloat is actually starting to become cheaper again than trying to combat it with more silicon. And no, I am not just saying that to promote my upcoming book[2], I've actually been saying the same thing since before I started writing it.
Interesting times.
[1] https://www.microsoft.com/en-us/research/publication/spendin...
[2] https://www.amazon.com/MACOS-PERFORMANCE-TUNING-Developers-L...
However, it is important not to conflate "scripting language" and "dynamic language" and "interpreted". While there is some correlation there, it is not a necessary one.
Objective-C is an example of a fast AOT-compiled pretty dynamic language, and WebScript was an interpreted scripting language with pretty much identical syntax and semantics.[2]
What do I mean by fast? In my experience, Objective-C can be extremely fast [3], though it can also be used very much like a scripting language, in ways that are as slow as or even slower than popular scripting languages. That range is very interesting.
So I don't actually think the tradeoff you describe between low-level unergonomic fast and high-level ergonomic slow is a necessary one, and one of the goals of Objective-S is to prove that point.[4]
So far, it's looking very good. Basically, the richer ways of connecting components appear to allow fairly simple "scripted" connections to achieve reasonably high performance [5]. However, I now have a very simple AOT compiler (no optimizations whatsoever!) and that gives another factor of 2.6 [6].
Steve Vinoski wrote: "Does developer convenience really trump correctness, scalability, performance, separation of concerns, extensibility, and accidental complexity?"[7].
I am saying: how about we not have to choose?
And I'd much rather debug/modify semantically rich, high-level code that my LLM generated.
[1] https://blog.metaobject.com/2015/10/jitterdammerung.html
[2] https://blog.metaobject.com/2019/12/the-4-stages-of-objectiv...
[3] https://www.amazon.com/gp/product/0321842847/ref=as_li_tl?ie...
[4] https://objective.st
[5] https://blog.metaobject.com/2021/07/deleting-code-to-double-...
[6] https://dl.acm.org/doi/10.1145/3689492.3690052
[7] https://darkcoding.net/research/IEEE-Convenience_Over_Correc...