Commit 441f9e8

Fix links everywhere (#16880)
Fix broken documentation URLs and add lint-ignore comments for URLs that fail due to timeouts or authentication requirements.

Fixed 404 broken URLs:
- Updated torch.export documentation URLs from pytorch.org/docs/stable/export.html to docs.pytorch.org/docs/stable/user_guide/torch_compiler/export.html
- Fixed ARM Ethos-U backend URL to use the new path structure (/backends/arm-ethos-u/arm-ethos-u-overview.html)
- Fixed XNNPACK internals URL (xnnpack-internals.html → xnnpack-arch-internals.html)
- Fixed XNNPACK delegate lowering tutorial URL
- Fixed Qualcomm backend URL

Added @lint-ignore for URLs that can't be automatically verified:
- HuggingFace gated model URLs (require authentication, return 401)
- PyTorch HUD URLs (time out in the CI environment, return 000)

Other cleanup:
- Removed an outdated paragraph referencing the non-existent test_xnnpack_qnnpack.py file from the delegate documentation

Test plan:
- Verify the link-check / lint-urls CI job passes
- Spot-check a few of the updated URLs manually

Co-authored-by: Mergen Nachin <[email protected]>
1 parent 3c9e902 commit 441f9e8
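The lint-urls job named in the test plan scans files for URLs, and the `@lint-ignore` annotations added in this commit tell it to skip a line. The repository's real checker is not part of this page, so the following is only a minimal stdlib sketch of the idea; `URL_RE`, `urls_to_check`, and `sample` are illustrative names, not ExecuTorch code:

```python
import re

# URLs are anything http(s)://... up to whitespace or a common delimiter.
URL_RE = re.compile(r"https?://[^\s)\"'<>]+")

def urls_to_check(lines):
    """Yield URLs the checker should verify, honoring @lint-ignore."""
    for line in lines:
        if "@lint-ignore" in line:
            continue  # the author vouches for this URL (auth-gated, flaky, ...)
        yield from URL_RE.findall(line)

sample = [
    'BASE = "https://hud.pytorch.org"  # @lint-ignore',
    "See https://docs.pytorch.org/executorch/ for details.",
]
print(list(urls_to_check(sample)))
```

If the check is line-granular as sketched, that would also explain why the test diffs below reflow assertions onto multiple lines: the annotation has to share a line with the URL it vouches for.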

File tree

28 files changed (+41 / -62 lines)


.ci/scripts/benchmark_tooling/get_benchmark_analysis_data.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -104,7 +104,7 @@ class BenchmarkFilters:
 
 BASE_URLS = {
     "local": "http://localhost:3000",
-    "prod": "https://hud.pytorch.org",
+    "prod": "https://hud.pytorch.org",  # @lint-ignore
 }
 
```
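The hunk above only annotates the `prod` entry; how `BASE_URLS` is consumed is not shown in this commit. The test expectations in `.ci/scripts/tests/test_get_benchmark_analysis_data.py` suggest an env-to-URL lookup roughly like the hypothetical sketch below (`BenchmarkFetcher` and its `ValueError` on unknown envs are assumptions, not the real `ExecutorchBenchmarkFetcher`):

```python
# Hypothetical reconstruction; the real ExecutorchBenchmarkFetcher is not in this diff.
BASE_URLS = {
    "local": "http://localhost:3000",
    "prod": "https://hud.pytorch.org",  # @lint-ignore
}

class BenchmarkFetcher:
    """Toy stand-in that resolves a base URL from an environment name."""

    def __init__(self, env: str = "prod") -> None:
        if env not in BASE_URLS:
            raise ValueError(f"unknown env {env!r}; expected one of {sorted(BASE_URLS)}")
        self.env = env
        self.base_url = BASE_URLS[env]

print(BenchmarkFetcher().base_url)
print(BenchmarkFetcher(env="local").base_url)
```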

.ci/scripts/tests/test_get_benchmark_analysis_data.py

Lines changed: 6 additions & 2 deletions

```diff
@@ -179,7 +179,9 @@ def setUp(self):
     def test_init(self):
         """Test initialization of ExecutorchBenchmarkFetcher."""
         self.assertEqual(self.fetcher.env, "prod")
-        self.assertEqual(self.fetcher.base_url, "https://hud.pytorch.org")
+        self.assertEqual(
+            self.fetcher.base_url, "https://hud.pytorch.org"  # @lint-ignore
+        )
         self.assertEqual(
             self.fetcher.query_group_table_by_fields,
             ["model", "backend", "device", "arch"],
@@ -193,7 +195,9 @@ def test_init(self):
 
     def test_get_base_url(self):
         """Test _get_base_url method."""
-        self.assertEqual(self.fetcher._get_base_url(), "https://hud.pytorch.org")
+        self.assertEqual(
+            self.fetcher._get_base_url(), "https://hud.pytorch.org"  # @lint-ignore
+        )
 
         # Test with local environment
         local_fetcher = self.module.ExecutorchBenchmarkFetcher(env="local")
```

CONTRIBUTING.md

Lines changed: 2 additions & 1 deletion

```diff
@@ -324,7 +324,8 @@ the code you're modifying and find an author who has more context. Ask them
 for their help in the PR comments.
 
 ### Continuous Integration
-See https://hud.pytorch.org/hud/pytorch/executorch/main for the current state of
+
+See https://hud.pytorch.org/hud/pytorch/executorch/main for the current state of <!-- @lint-ignore -->
 the CI (continuous integration) jobs. If `main` is broken, consider rebasing
 your PR onto the `release/1.1` branch, which points to the most recent
 all-green commit.
```

backends/openvino/quantizer/quantizer.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -295,7 +295,7 @@ def _get_unified_scales_root_quantizer_id(
     """
     Identifies the earliest quantizer node ID based on the corresponding `nncf_node.node_id`
     in the given NNCFGraph. This is required by the `_get_obs_or_fq_map` function.
-    Refer to: https://github.com/pytorch/pytorch/blob/main/torch/ao/quantization/pt2e/prepare.py#L291
+    Refer to: https://github.com/pytorch/ao/blob/main/torchao/quantization/pt2e/prepare.py
 
     :param nncf_graph: The NNCFGraph instance.
     :param quantizer_ids: The list of quantizer IDs to evaluate.
```

backends/xnnpack/README.md

Lines changed: 2 additions & 2 deletions

```diff
@@ -133,5 +133,5 @@ create an issue on [github](https://www.github.com/pytorch/executorch/issues).
 
 ## See Also
 For more information about the XNNPACK Backend, please check out the following resources:
-- [XNNPACK Backend](https://pytorch.org/executorch/main/backends-xnnpack)
-- [XNNPACK Backend Internals](https://pytorch.org/executorch/main/backends/xnnpack/backend-delegates-xnnpack-reference)
+- [XNNPACK Backend](https://docs.pytorch.org/executorch/main/backends/xnnpack/xnnpack-overview.html)
+- [XNNPACK Backend Internals](https://docs.pytorch.org/executorch/main/backends/xnnpack/xnnpack-arch-internals.html)
```

docs/source/Doxyfile

Lines changed: 1 addition & 1 deletion

```diff
@@ -1535,7 +1535,7 @@ DOCSET_PUBLISHER_NAME = Publisher
 # a.o. the download links, offline the HTML help workshop was already many years
 # in maintenance mode). You can download the HTML help workshop from the web
 # archives at Installation executable (see:
-# http://web.archive.org/web/20160201063255/http://download.microsoft.com/downlo
+# http://web.archive.org/web/20160201063255/http://download.microsoft.com/downlo @lint-ignore
 # ad/0/A/9/0A939EF6-E31C-430F-A3DF-DFAE7960D564/htmlhelp.exe).
 #
 # The HTML Help Workshop contains a compiler that can convert all HTML output
```

docs/source/archive/backends-cadence-legacy.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -137,7 +137,7 @@ python3 -m examples.cadence.operators.quantized_<linear,conv1d>_op
 
 ***Small Model: RNNT predictor***:
 
-The torchaudio [RNNT-emformer](https://pytorch.org/audio/stable/tutorials/online_asr_tutorial.html) model is an Automatic Speech Recognition (ASR) model, comprised of three different submodels: an encoder, a predictor and a joiner.
+The torchaudio [RNNT-emformer](https://docs.pytorch.org/audio/stable/generated/torchaudio.pipelines.EMFORMER_RNNT_BASE_LIBRISPEECH.html) model is an Automatic Speech Recognition (ASR) model, comprised of three different submodels: an encoder, a predictor and a joiner.
 The [predictor](https://github.com/pytorch/executorch/blob/main/examples/cadence/models/rnnt_predictor.py) is a sequence of basic ops (embedding, ReLU, linear, layer norm) and can be exported using:
 
 ```bash
````

docs/source/backends-cadence.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -197,7 +197,7 @@ In all cases the generated file is called `CadenceDemoModel.pte`.
 
 ***Speech/Audio Models***:
 
-The torchaudio [RNNT-emformer](https://pytorch.org/audio/stable/tutorials/online_asr_tutorial.html) model is an Automatic Speech Recognition (ASR) model, comprised of three different submodels:
+The torchaudio [RNNT-emformer](https://docs.pytorch.org/audio/stable/generated/torchaudio.pipelines.EMFORMER_RNNT_BASE_LIBRISPEECH.html) model is an Automatic Speech Recognition (ASR) model, comprised of three different submodels:
 
 - **RNNT Predictor**: Sequence of basic ops (embedding, ReLU, linear, layer norm)
 ```bash
````

docs/source/compiler-delegate-and-partitioner.md

Lines changed: 0 additions & 12 deletions

````diff
@@ -187,18 +187,6 @@ exported_program_backend_1 = to_backend(exported_program, backend_1_parititioner
 exported_program_backend_1_and_2 = to_backend(exported_program_backend_1, backend_2_parititioner())
 ```
 
-A more concrete example be found
-[here](https://github.com/pytorch/executorch/blob/main/exir/backend/test/demos/test_xnnpack_qnnpack.py).
-In this example,
-qnnpack is one backend and xnnpack is another backend. We haven't open-sourced
-these two backends delegates yet, and this example won't run out of box. It can
-be used as a reference to see how it can be done.
-
-This option is easy to try because usually all backends will implement their own
-partitioner. However this option may get different results if we change the
-order of to_backend call. If we want to have a better control on the nodes, like
-which backend they should go, option 2 is better.
-
 *Option 2: Have a partitioner which partitions for different backends*
 
 Another option is to create a customized partitioner, say partitioner
````
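The paragraph removed in this hunk noted that Option 1 (chained `to_backend` calls) can give different results depending on call order. That order sensitivity is easy to see with a toy model of sequential partitioning, where each partitioner claims whichever of its supported nodes are still unclaimed; this is illustrative pure Python, not ExecuTorch's real `to_backend`/partitioner API:

```python
# Toy model of Option 1 (sequential to_backend calls), not the real ExecuTorch API:
# each partitioner claims whichever of its supported nodes are still unclaimed.
def sequentially_partition(graph, partitioners):
    owner = {}
    for backend, supported in partitioners:
        for node in graph:
            if node not in owner and node in supported:
                owner[node] = backend
    return owner

graph = ["conv", "add", "mul"]
backend_a = ("A", {"conv", "add"})  # A supports conv and add
backend_b = ("B", {"add", "mul"})   # B supports add and mul

# "add" is supported by both, so it goes to whichever backend runs first:
print(sequentially_partition(graph, [backend_a, backend_b]))
print(sequentially_partition(graph, [backend_b, backend_a]))
```

Because the contested `add` node lands on different backends in the two orderings, a single partitioner with a global view (Option 2) gives finer control over node placement.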

docs/source/getting-started-architecture.md

Lines changed: 3 additions & 3 deletions

```diff
@@ -22,7 +22,7 @@ leverages PyTorch 2 compiler and export functionality
 [AOTAutograd](https://pytorch.org/functorch/stable/notebooks/aot_autograd_optimizations.html),
 [Quantization](https://pytorch.org/docs/main/quantization.html),
 [dynamic shapes](https://pytorch.org/get-started/pytorch-2.0/#pytorch-2x-faster-more-pythonic-and-as-dynamic-as-ever),
-[control flow](https://pytorch.org/docs/main/export.html#data-shape-dependent-control-flow),
+[control flow](https://docs.pytorch.org/docs/stable/user_guide/torch_compiler/export.html#data-shape-dependent-control-flow),
 etc.) to prepare a PyTorch program for execution on devices.
 
 Program preparation is often simply called AOT (ahead-of-time) because export, transformations and compilations to the program are performed before it is eventually run with the ExecuTorch runtime, written in C++. To have a lightweight runtime and small overhead in execution, we push work as much as possible to AOT.
@@ -33,14 +33,14 @@ Starting from the program source code, below are the steps you would go through
 
 * Like all PyTorch use cases, ExecuTorch starts from model authoring, where standard `nn.Module` eager mode PyTorch programs are created.
 * Export-specific helpers are used to represent advanced features like [control
-  flow](https://pytorch.org/docs/main/export.html#data-shape-dependent-control-flow)
+  flow](https://docs.pytorch.org/docs/stable/user_guide/torch_compiler/export.html#data-shape-dependent-control-flow)
   (for example, helper functions to trace both branches of if-else) and [dynamic
   shapes](https://pytorch.org/get-started/pytorch-2.0/#pytorch-2x-faster-more-pythonic-and-as-dynamic-as-ever)
   (for example, data dependent dynamic shape constraint).
 
 ### Export
 
-To deploy the program to the device, engineers need to have a graph representation for compiling a model to run on various backends. With [`torch.export()`](https://pytorch.org/docs/main/export.html), an [EXIR](ir-exir.md) (export intermediate representation) is generated with ATen dialect. All AOT compilations are based on this EXIR, but can have multiple dialects along the lowering path as detailed below.
+To deploy the program to the device, engineers need to have a graph representation for compiling a model to run on various backends. With [`torch.export()`](https://docs.pytorch.org/docs/stable/user_guide/torch_compiler/export.html), an [EXIR](ir-exir.md) (export intermediate representation) is generated with ATen dialect. All AOT compilations are based on this EXIR, but can have multiple dialects along the lowering path as detailed below.
 
 * _[ATen Dialect](ir-exir.md#aten-dialect)_. PyTorch Edge is based on PyTorch’s Tensor library ATen, which has clear contracts for efficient execution. ATen Dialect is a graph represented by ATen nodes which are fully ATen compliant. Custom operators are allowed, but must be registered with the dispatcher. It’s flatten with no module hierarchy (submodules in a bigger module), but the source code and module hierarchy are preserved in the metadata. This representation is also autograd safe.
 * Optionally, _quantization_, either QAT (quantization-aware training) or PTQ (post training quantization) can be applied to the whole ATen graph before converting to Core ATen. Quantization helps with reducing the model size, which is important for edge devices.
```
