Conversation
I have used ccache for the past decade, and I have never had any problems with it; it just works and does exactly what it is meant to do without any fuss. It's also well-established software by now: there's support for it in CMake and probably a bunch of other build systems. Despite making such a glowing recommendation for ccache, I would still avoid using it when building release binaries, but that is just to be certain that release binaries are pristine.
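For the release-binary caveat, ccache has an explicit off switch: it honors the `CCACHE_DISABLE` environment variable and forwards every invocation straight to the real compiler when it is set. A minimal sketch (the `make release` target name is hypothetical, not something from this repository):

```shell
# ccache honors CCACHE_DISABLE: when set, every call is passed straight
# through to the real compiler, so the binaries are compiled from scratch.
export CCACHE_DISABLE=1
# A release workflow would then run its usual build, e.g. (hypothetical target):
#   make release
echo "ccache bypassed: CCACHE_DISABLE=$CCACHE_DISABLE"
```

This way ccache can stay installed for day-to-day work while release builds bypass it entirely.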
Out of curiosity: why would I need this exactly? To my, maybe limited, understanding this would speed up compilation when first removing all built files and then building all targets again. But is that a common workflow? And your current PR doesn't allow changing the action depending on whether a debug, test or release build is compiled.
You are right, ccache can't do anything about code it hasn't seen before, so the 5000 (?) lines of genuinely new code you write in a day have to be compiled by the compiler. With that said, chances are that the dependencies specified by the Makefiles are partial/incomplete, so the build system will either be sound and rebuild some extra files just in case, or it will be unsound and sometimes force you to manually clean out the build directory to get everything that needs to be rebuilt rebuilt. Both those use-cases are helped by ccache. Even if the build system for FORCE is perfect, there is still a decent chance that a file after preprocessing will be identical to the previous compilation, and ccache helps in that case too. If you for some reason don't like ccache, you can just not have it installed.

The PR allows changing the action on every make invocation. I have never experienced a bug where ccache has given me the wrong result, but I still think that the release workflow should ensure that ccache isn't used to build the release binaries. This means either not installing ccache in the release workflow, or the release workflow calling make with the launcher variable left empty.
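The per-invocation opt-in/opt-out described above can be sketched as a Makefile pattern; the variable name `COMPILER_LAUNCHER` is a placeholder here, not necessarily what the PR actually calls it:

```make
# Empty by default, so nothing changes unless the caller opts in.
COMPILER_LAUNCHER ?=
CC  := $(COMPILER_LAUNCHER) gcc
CXX := $(COMPILER_LAUNCHER) g++

# Opt in:   make COMPILER_LAUNCHER=ccache
# Opt out:  make COMPILER_LAUNCHER=    (e.g. in a release workflow)
```

With an empty default, the launcher expands to nothing and the compile commands are byte-for-byte what they were before.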
Thanks for the additional explanation, I think I got a rough idea of what this is trying to achieve. To be honest, this reads like there are a lot of "ifs" and "maybes", proposing a solution to a possibly non-existing problem, as it says nothing about an existing problem in the FORCE code base. Or am I missing some previously discussed compilation issues? If there are problems in the dependency specification which cause Make to re-compile source files that are unchanged, why not improve the Makefile itself? Additionally, according to their documentation, ccache does not work for multi-file compilation and linkage in general and may miss newly included header files on some occasions. But I'm also not the person to decide on code additions in the end :) Cheers
yeah, the bit on
IIRC, GitHub supports caching to certain degrees with dedicated actions. But as I have never used it, I can't speak about its usefulness in general. The example GitHub provides strongly focuses on expensive dependency installations, which is not applicable for FORCE. I personally have never experienced issues in compilation/build times either, and apart from my thoughts above I still wonder what problem would be solved. I'd also join David's concern regarding uncertainty/trust, i.e. reduced understanding of what happens during compilation.
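For reference, the GitHub-side caching mentioned here would look roughly like this with the stock `actions/cache` action; the cache path and key are guesses, not a tested FORCE workflow:

```yaml
# Hypothetical workflow fragment: persist ccache's cache directory between
# CI runs. actions/cache restores on a key match before the build and
# saves the directory again after the job finishes.
- name: Cache ccache directory
  uses: actions/cache@v4
  with:
    path: ~/.cache/ccache
    key: ccache-${{ runner.os }}-${{ github.sha }}
    restore-keys: |
      ccache-${{ runner.os }}-
```

This is the general-purpose mechanism; it caches files between runs but does nothing about compilation itself, which is why it is usually paired with a tool like ccache.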
I obviously don't know the real risk of adding ccache, but the fact that I've never had any problems despite extensive use both by myself and colleagues (+nightly CI builds + PR builds) should at least make it plausible that the actual risk is quite low, but the risk is obviously not zero so it would be stupid of me to claim that.

My recommendation is based on perceived risk vs reward; you do test builds all the time so speeding that up will mean you get to go home earlier (or you get more work done), but you do releases relatively infrequently so speeding those up will only get you to go home earlier one day per quarter.

The reason I did this PR is first and foremost because it helps me, and my gut feeling is that it would also improve the situation for others who compile FORCE regularly. But if people don't want to use ccache I'm not going to force them, you can avoid installing ccache at all on your system, and even if you install ccache I have described how to still not use it even with this PR merged.
I can only speak from my personal experience over the last couple of years: waiting for the compilation of FORCE has never been an issue, neither in automated builds here on GitHub nor in local ones. When working on changes, "caching" by Make was sufficient. This weighs personal experience against personal experience; nonetheless I stand by my feeling that this adds a layer of complexity and uncertainty without showing that a prevalent/serious issue is solved. We (or rather David) would add the possibility of errors for unclear performance reasons (if you'd like to call them that). This is exacerbated by the fact that the change is documented nowhere else: it silently adds a feature and calls it a day!

My experience with FORCE has always been a strong focus on "regulated changes" and care towards changes/updates, so as to mentally stay on top of the code base. None of the contributors churns out thousands of lines of code per day, there are no large CI/CD pipelines, no large amount of incoming PRs and no nightly/frequent builds. Frankly speaking, the second paragraph sounds somewhat weird overall. Cheers,
This corresponds to CMake's <LANG>_COMPILER_LAUNCHER variable. The variable is never set by anything in this repository, so everything behaves the same way before and after this commit.
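For context, the CMake variable referenced in the commit message works by prefixing every compile command with the configured launcher when it is set. A typical opt-in looks like the sketch below; per the commit message, nothing in this repository sets it, so this is purely illustrative:

```cmake
# Hypothetical opt-in from the command line:
#   cmake -S . -B build -DCMAKE_CXX_COMPILER_LAUNCHER=ccache
# or guarded in a CMakeLists.txt:
find_program(CCACHE_PROGRAM ccache)
if(CCACHE_PROGRAM)
  set(CMAKE_C_COMPILER_LAUNCHER   "${CCACHE_PROGRAM}")
  set(CMAKE_CXX_COMPILER_LAUNCHER "${CCACHE_PROGRAM}")
endif()
```

When the variable is unset or empty, CMake generates the same compile commands it always did.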
It seems the contention is around the auto-detection part of the PR, so I have removed those parts. Since the variable is never set by anything, everything behaves exactly the same as before.
Ok, I have to wonder if you use LLMs to generate PRs, comments, issues and so on in this repository and the plethora of others you are active in. Anyhow, ...
There were multiple concerns raised, only one of which you try to address with your new PR. The main issues were correctness of compilation and (from my point of view more importantly) the question of why this is needed in this repo. If your reply regarding correctness of compilation is "everything behaves exactly the same as before", I'd like to pose the question: why include it in the first place then? Constant re-compilation is not the reason why FORCE exists. Cheers,
No, I have never used an LLM.
Ok, I'll clarify:
In both cases, ccache detects the circumstance and stays out of the way, so this "degrades" to normal compilation/linking and is no worse or better than things currently are.
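The graceful degradation described above can be sketched in shell; this is an illustrative wrapper, not FORCE's actual build setup, and mirrors how ccache falls back to a plain compiler call for invocations it cannot cache:

```shell
# Illustrative sketch: pick ccache as a compiler launcher when it exists,
# otherwise fall back to the plain compiler. Either way the build proceeds;
# the cached path is an optimization, not a requirement.
CC="cc"
if command -v ccache >/dev/null 2>&1; then
    CC="ccache cc"
fi
echo "compiling with: $CC"
```

In both branches the same compiler ultimately runs, which is the sense in which the cached setup is "no worse or better" than the status quo.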
Not sure what this refers to, so I need a link to the part of the documentation where you read that.

Taking a step back and looking at the big picture: any software that isn't formally verified has bugs, and formally verified software like CompCert has had bugs in areas where it interfaces with the operating system (which isn't formally verified). This turns building software into an exercise in risk management, and deciding what an acceptable level of risk is. I don't know your exact situation so I haven't stated anything about what level of risk you should accept; you know the details of your situation better than I do. My experience across tens of thousands of compilations across millions of lines of code in half a dozen different projects is that everything works, but despite that I provided a recommendation of caution and sound engineering with risk elimination when building releases. If you decide to go to either extreme end, never using ccache at all yourself, or building everything including releases with ccache, I'm not going to argue; if I have the knobs available I can easily adjust choices I disagree with locally.
It's free performance, so I'm struggling to see why anyone would oppose that. But even if you want things to be slower for yourself, why impose that on everybody else instead of giving them the option to decide for themselves?
All right, thanks for your reply 👍
I took that from their documentation here. As I'm not accustomed with ccache, I may well have misread it.

I'm aware that non-formalized/non-verified software always comes with the risk of containing bugs, and didn't want to formulate the hubris that FORCE is an exception to that. It's clear that this is not the case. What irritates me are/were the proposed benefits of including ccache. I'd argue against it being "free" performance, because there's still something that needs to happen in order to speed up compilation, but I get the point.

As for the question: you may want to call it arrogance, stupidity, overconfidence in my own knowledge or whatever, and I'm sorry for currently not being able to formulate a better response than this: a mixture of the notion "we can, therefore we must" and the question "who needs this right now, for what?". Maybe I can come up with a better formulation, maybe not; I'll see.

Lastly, to re-iterate a point I made earlier: me currently opposing this change does not mean it's not seen as a welcome change by others, nor that it won't be included in FORCE! I'm not in a position to decide on any of that. Cheers,
Guys, who has time to read all of this... Keep it professional and friendly please. I see valid points from both of you. The reason why I did not merge this PR yet is that I also have some concerns. In particular: if you do not fully trust it (which I pointed to in the comment I quoted from @pjonsson earlier), why use it in the first place? I do not have time right now to delve that deeply into it to make a decision here. Therefore, this is stalled for the moment. I will pick it up when time allows.