Merged
2 changes: 1 addition & 1 deletion .github/styles/Vocab/ipfs-docs-vocab/accept.txt
@@ -114,7 +114,7 @@ homebrew
hostname
HTML
HTTPS
identafiability
identifiability
Infura
interop
ipget
18 changes: 17 additions & 1 deletion .github/styles/pln-ignore.txt
@@ -33,6 +33,8 @@ Caddyfile
callout
callouts
cas
cdn('s)
CDN's
cdns
certbot
cid
@@ -55,6 +57,7 @@ crowdsourcing
crypto(currencies)
daos
dapps
dClimate
data('s)
datastore
deduplicate
@@ -68,6 +71,8 @@ deserialized
devs
dheeraj
dht
dht('s)
DHT's
dhts
dialable
dialback
@@ -95,6 +100,8 @@ filestore
flatfs
flatf[ss]
fleek
Fleek's
fleek('s)
fqdns
gasless
geospatial
@@ -114,7 +121,7 @@ hostname
hostnames
html
https
identafiability
identifiability
infura
interop
ipfs
@@ -136,6 +143,7 @@ keypair
keystores
kubo
Kubo's
Lakhani
kubuxu
laika
lan
@@ -189,6 +197,8 @@ nats
neocities
netlify
next.js
nft('s)
NFT's
nfts
nginx
nodejs
@@ -208,6 +218,7 @@ pluggable
powergate
powershell
preload
prenegotiated
prepended
processannounce
protobuf
@@ -238,6 +249,7 @@ sandboxed
satoshi
satoshi nakamoto
SDKs
se
serverless
sharding
snapshotted
@@ -259,6 +271,7 @@ takedown
testground
testnet
toolkits
toolset
trustlessly
trustlessness
uncensorable
@@ -279,6 +292,7 @@ vue
Vuepress
wantlist
wantlists
WASM
web
webpack
webpages
@@ -298,4 +312,6 @@ youtube
IPFS's
IPIPs
IPIP
Zeeshan
Zelenka
_redirects
1 change: 1 addition & 0 deletions docs/.vuepress/redirects
@@ -45,6 +45,7 @@
/how-to/run-ipfs-inside-docker /install/run-ipfs-inside-docker
/how-to/ipfs-updater /install/command-line
/how-to/websites-on-ipfs/link-a-domain /how-to/websites-on-ipfs/custom-domains
/how-to/websites-on-ipfs/introducing-fleek /how-to/websites-on-ipfs/static-site-generators
/how-to/gateway-troubleshooting /how-to/troubleshooting
/install/command-line-quick-start/ /how-to/command-line-quick-start
/install/js-ipfs/ https://github.com/ipfs/helia/wiki
4 changes: 2 additions & 2 deletions docs/case-studies/arbol.md
@@ -17,7 +17,7 @@ _— Ben Andre, CTO, Arbol_
<img src="./images/logo-arbol.svg" alt="Arbol logo" width="220">
:::

[Arbol](https://www.arbolmarket.com/) is a software platform that connects agricultural entities like farmers and other weather-dependent parties with investors and other capital providers to insure and protect against weather-related risks. Arbol's platform sells contracts for parametric weather protection agreements in a marketplace that's an innovative, data-driven approach to risk management, cutting out the usual legacy insurance claims process of making loss assessments on the ground. Instead, Arbol relies on tamper-proof data indexes to determine payouts, and doesn't require a defined loss to be indemnified. Arbol's platform combines parametric weather protection with blockchain-based smart contracts to provide cost-efficient, automated, and user-defined weather-related risk hedging. As with traditional crop insurance and similar legacy products, end users purchase assurance that they'll be financially protected in the case of adverse weather — but with Arbol, these end users are paid automatically if adverse conditions occur, as defined by the contract and measured by local meteorological observations tracked by Arbol's data sources.
[Arbol](https://www.arbol.io/) is a software platform that connects agricultural entities like farmers and other weather-dependent parties with investors and other capital providers to insure and protect against weather-related risks. Arbol's platform sells contracts for parametric weather protection agreements in a marketplace that's an innovative, data-driven approach to risk management, cutting out the usual legacy insurance claims process of making loss assessments on the ground. Instead, Arbol relies on tamper-proof data indexes to determine payouts, and doesn't require a defined loss to be indemnified. Arbol's platform combines parametric weather protection with blockchain-based smart contracts to provide cost-efficient, automated, and user-defined weather-related risk hedging. As with traditional crop insurance and similar legacy products, end users purchase assurance that they'll be financially protected in the case of adverse weather — but with Arbol, these end users are paid automatically if adverse conditions occur, as defined by the contract and measured by local meteorological observations tracked by Arbol's data sources.

To build the data indexes that Arbol uses to handle its contracts, the team aggregates and standardizes billions of data files comprising decades of weather information from a wide range of reputable sources — all of which is stored on IPFS. IPFS is critical to Arbol's service model due to the inherent verifiability provided by its [content-addressed architecture](../concepts/content-addressing.md), as well as a decentralized data delivery model that facilitates Arbol's day-to-day aggregation, synchronization, and distribution of massive amounts of data.
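The verifiability that content addressing provides can be shown with a short sketch. This is illustrative only: a real IPFS CID wraps the digest in multihash and multibase encodings, but a bare SHA-256 digest is enough to demonstrate the principle that the address is derived from the bytes themselves.

```python
import hashlib

def content_address(data: bytes) -> str:
    # Simplified stand-in for a CID: address = hash of the content.
    return hashlib.sha256(data).hexdigest()

# A hypothetical weather record of the kind Arbol might aggregate.
record = b"station=KNYC,date=2020-07-01,rainfall_mm=12.4"
addr = content_address(record)

# Any node that fetches the bytes can re-hash them and compare against
# the address it requested; altered data yields a different address.
assert content_address(record) == addr
assert content_address(b"station=KNYC,date=2020-07-01,rainfall_mm=99.9") != addr
```

Because the address commits to the content, tampering anywhere in the delivery path is detectable by the receiver without trusting the sender.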

@@ -88,7 +88,7 @@ Arbol's end users enjoy the "it just works" benefits of parametric protection, b

8. **Pinning and syncing:** When storage nodes in the Arbol network detect that a new hash has been added to the heads file, they run the standard, recursive [`ipfs pin -r`](../reference/kubo/cli.md#ipfs-pin) command on it. Arbol's primary active nodes don't need to be large in number: The network includes a single [gateway node](../concepts/ipfs-gateway.md) that bootstraps with all the parsing/hashing nodes, and a few large storage nodes that serve as the primary data storage backup. However, data is also regularly synced with "cold nodes" — archival storage nodes that are mostly kept offline — as well as on individual IPFS nodes on Arbol's developers' and agronomists' personal computers.

9. **Garbage collection:** Some older Arbol datasets require [garbage collection](../concepts/glossary.md#garbage-collection) whenever new data is added, due to a legacy method of overwriting old hashes with new hashes. However, all of Arbol's newer datasets use an architecture where old hashes are preserved and new posts reference the previous post. This methodology creates a linked list of hashes, with each hash containing a reference to the previous hash. As the length of the list becomes computationally burdensome, the system consolidates intermediate nodes and adds a new route to the head, creating a [DAG (directed acyclic graph)](../concepts/merkle-dag.md) structure. Heads are always stored in a master [heads.json reference file](https://gateway.arbolmarket.com/climate/hashes/heads.json) located on Arbol's command server.
9. **Garbage collection:** Some older Arbol datasets require [garbage collection](../concepts/glossary.md#garbage-collection) whenever new data is added, due to a legacy method of overwriting old hashes with new hashes. However, all of Arbol's newer datasets use an architecture where old hashes are preserved and new posts reference the previous post. This methodology creates a linked list of hashes, with each hash containing a reference to the previous hash. As the length of the list becomes computationally burdensome, the system consolidates intermediate nodes and adds a new route to the head, creating a [DAG (directed acyclic graph)](../concepts/merkle-dag.md) structure. Heads are always stored in a master [heads.json reference file](https://web.archive.org/web/20230318223234/https://gateway.arbolmarket.com/climate/hashes/heads.json) located on Arbol's command server.
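The linked-list-of-hashes structure described above can be sketched as follows. This is a hypothetical illustration, not Arbol's actual code: each new post embeds the hash of the previous post, so old hashes are preserved and the head always reaches the full history.

```python
import hashlib
import json

def post(payload, prev_hash):
    # Each post references the previous post's hash, forming a linked list.
    node = {"payload": payload, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(node, sort_keys=True).encode()).hexdigest()
    return digest, node

store = {}   # hash -> node, standing in for content-addressed storage
head = None
for day in range(1, 6):
    head, node = post({"day": day}, head)
    store[head] = node

def chain_length(store, head):
    # Walk prev-links from the head back to the first post.
    count = 0
    while head is not None:
        count += 1
        head = store[head]["prev"]
    return count

assert chain_length(store, head) == 5

# When walking the list becomes burdensome, a consolidated node can point
# directly at a much earlier hash as well as the latest one, adding a short
# route to the head and turning the list into a DAG.
```

A separate heads file (like Arbol's heads.json) then only needs to record the current head hash per dataset.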

### The tooling

4 changes: 4 additions & 0 deletions docs/case-studies/fleek.md
@@ -3,6 +3,10 @@ title: 'Case study: Fleek'
description: Explore some helpful use cases, ideas, and examples for the InterPlanetary File System (IPFS).
---

::: warning Fleek hosting discontinued
Fleek's IPFS hosting service was discontinued on January 31st, 2026. This case study is preserved for historical purposes.
:::

# Case study: Fleek

::: callout
1 change: 0 additions & 1 deletion docs/concepts/README.md
@@ -55,7 +55,6 @@ We're adding more documentation all the time and making ongoing revisions to exi

- [Case study: Arbol](../case-studies/arbol.md)
- [Case study: Audius](../case-studies/audius.md)
- [Case study: Fleek](../case-studies/fleek.md)
- [Case study: LikeCoin](../case-studies/likecoin.md)
- [Case study: Morpheus.Network](../case-studies/morpheus.md)
- [Case study: Snapshot](../case-studies/snapshot.md)
6 changes: 3 additions & 3 deletions docs/concepts/cod.md
@@ -11,7 +11,7 @@ IPFS users can perform CoD on IPFS data with the [Bacalhau platform](#bacalhau)

## Bacalhau

Bacalhau is a platform for fast, cost-efficient, secure, distributed computation. Bacalhau works by running jobs where the data is generated and stored, also referred to as Compute Over Data (or CoD). Using Bacalhau, you can streamline existing workflows without extensive refactoring by running arbitrary Docker containers and WebAssembly (Wasm) images as compute tasks. The name _Bacalhau_ was coined from the Portuguese word for "salted cod fish".
Bacalhau is a platform for fast, cost-efficient, secure, distributed computation. Bacalhau works by running jobs where the data is generated and stored, also referred to as Compute Over Data (or CoD). Using Bacalhau, you can streamline existing workflows without extensive refactoring by running arbitrary Docker containers and WebAssembly (WASM) images as compute tasks. The name _Bacalhau_ was coined from the Portuguese word for "salted cod fish".

### Features

@@ -25,7 +25,7 @@ Bacalhau can:
- Run against data [mounted anywhere](https://docs.bacalhau.org/#how-it-works) on your machine.
- Integrate with services running on nodes to run jobs, such as [DuckDB](https://docs.bacalhau.org/examples/data-engineering/DuckDB/).
- Operate at scale over parallel jobs and batch process petabytes of data.
- Auto-generate art using a [Stable Diffusion AI model](https://www.waterlily.ai/) trained on the chosen artist’s original works.
- Auto-generate art using a [Stable Diffusion AI model](https://web.archive.org/web/20250313163631/https://www.waterlily.ai/) trained on the chosen artist’s original works.

### More Bacalhau resources

@@ -36,7 +36,7 @@

The InterPlanetary Virtual Machine (IPVM) specification defines the easiest, fastest, most secure, and open way to run decentralized compute jobs on IPFS. One way to describe IPVM would be as "an open, decentralized, and local-first competitor to AWS Lambda".

IPVM uses [WebAssembly (Wasm)](https://webassembly.org/), content addressing, [simple public key infrastructure (SPKI)](https://en.wikipedia.org/wiki/Simple_public-key_infrastructure), and object capabilities to liberate computation from specific, prenegotiated services, such as large cloud computing providers. By default, execution scales flexibly on-device, all the way up to edge points-of-presence (PoPs) and data centers.
IPVM uses [WebAssembly (WASM)](https://webassembly.org/), content addressing, [simple public key infrastructure (SPKI)](https://en.wikipedia.org/wiki/Simple_public-key_infrastructure), and object capabilities to liberate computation from specific, prenegotiated services, such as large cloud computing providers. By default, execution scales flexibly on-device, all the way up to edge points-of-presence (PoPs) and data centers.

The core, Rust-based implementation and runtime of IPVM is the [Homestar project](https://github.com/ipvm-wg/homestar/). IPVM supports interoperability with [Bacalhau](https://bacalhau.org) and [Storacha (formerly web3.storage)](https://storacha.network/).

2 changes: 0 additions & 2 deletions docs/concepts/persistence.md
@@ -48,9 +48,7 @@ Some of the pinning services listed below are operated by third party companies.

- [4EVERLAND Bucket](https://www.4everland.org/bucket/)
- [Filebase](https://filebase.com/)
- [NFT.Storage](https://nft.storage/)
- [Pinata](https://pinata.cloud/)
- [Scaleway](https://labs.scaleway.com/en/ipfs-pinning/)
- [Storacha (formerly web3.storage)](https://storacha.network/)

See how to [work with remote pinning services](../how-to/work-with-pinning-services.md).
3 changes: 0 additions & 3 deletions docs/concepts/privacy-and-encryption.md
@@ -52,9 +52,6 @@ IPFS uses transport-encryption but not content encryption. This means that your
### Encryption-based projects using IPFS

- [Ceramic](https://ceramic.network/)
- [Fission.codes](https://fission.codes/)
- [Fleek](../case-studies/fleek.md)
- [Lit Protocol](https://litprotocol.com/)
- [OrbitDB](https://github.com/orbitdb)
- [Peergos](https://peergos.org/)
- [Textile](https://www.textile.io/)
4 changes: 1 addition & 3 deletions docs/how-to/best-practices-for-nft-data.md
@@ -139,9 +139,7 @@ When your data is stored on IPFS, users can fetch it from any IPFS node that has

If you're building a platform using IPFS for storage, it's important to pin your data to IPFS nodes that are robust and highly available, meaning that they can operate without significant downtime and with good performance. See our [server infrastructure documentation][docs-server-infra] to learn how [IPFS Cluster][ipfs-cluster] can help you manage your own cloud of IPFS nodes that coordinate to pin your platform's data and provide it to your users.

Alternatively, you can delegate the infrastructure responsibility to a remote pinning service. Remote pinning services like [Pinata](https://pinata.cloud) and [Eternum](https://www.eternum.io/) provide redundant, highly-available storage for your IPFS data, without any _vendor lock-in_. Because IPFS-based content is addressed by CID instead of location, you can switch between pinning services or migrate to your private infrastructure seamlessly as your platform grows.

You can also use a service from [Protocol Labs](https://protocol.ai) called [nft.storage](https://nft.storage) to get your data into IPFS, with long-term persistence backed by the decentralized [Filecoin](https://filecoin.io) storage network. To help foster the growth of the NFT ecosystem and preserve the new _digital commons_ of cultural artifacts that NFTs represent, [nft.storage](https://nft.storage) provides free storage and bandwidth for public NFT data. Sign up for a free account at [https://nft.storage](https://nft.storage) and try it out!
Alternatively, you can delegate the infrastructure responsibility to a remote pinning service. Remote pinning services like [Pinata](https://pinata.cloud), [Storacha](https://storacha.network/), and [Filebase](https://filebase.com/) provide redundant, highly-available storage for your IPFS data, without any _vendor lock-in_. Because IPFS-based content is addressed by CID instead of location, you can switch between pinning services or migrate to your private infrastructure seamlessly as your platform grows.

To learn more about persistence and pinning, including how to work with remote pinning services, see our [overview of persistence, permanence, and pinning][docs-persistence].

3 changes: 1 addition & 2 deletions docs/how-to/websites-on-ipfs/custom-domains.md
@@ -41,6 +41,5 @@ With this approach, users can access your website via a custom domain name, e.g.
To provide access to the app directly via the custom domain, you have the following options:

1. Self-host both the IPFS provider (e.g. [Kubo](https://github.com/ipfs/kubo)) and the IPFS HTTP gateway (e.g. [Kubo](https://github.com/ipfs/kubo)). Deploy an IPFS Gateway that supports DNSLink resolution and point the `CNAME`/`A` DNS record for your custom domain to it and update the `TXT` record on `_dnslink` subdomain to match CID of your website. [See the guide on setting up a DNSLink gateway](./dnslink-gateway.md) for more details.
2. Use a service like Fleek which encompasses both DNSLink and traditional web hosting (HTTP + TLS + CDN + [automatic DNSLink management](https://fleek.xyz/docs/platform/domains/#dnslink)).
3. Deploy the site to a web hosting service like [Cloudflare Pages](https://pages.cloudflare.com/) or [GitHub Pages](https://pages.github.com/) with a custom domain (pointing and configuring the `CNAME`/`A` record for your custom domain on the web hosting service), while managing the DNSLink `TXT` record on `_dnslink` subdomain separately, essentially getting the benefits of both IPFS and traditional web hosting. Remember to set up CI automation to update the DNSLink `TXT` record for every deployment that changes the CID.
2. Deploy the site to a web hosting service like [Cloudflare Pages](https://pages.cloudflare.com/) or [GitHub Pages](https://pages.github.com/) with a custom domain (pointing and configuring the `CNAME`/`A` record for your custom domain on the web hosting service), while managing the DNSLink `TXT` record on `_dnslink` subdomain separately, essentially getting the benefits of both IPFS and traditional web hosting. Remember to set up CI automation to update the DNSLink `TXT` record for every deployment that changes the CID.
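The `_dnslink` TXT record mentioned above has a simple value format: `dnslink=/ipfs/<cid>` (or `/ipns/<name>`). The sketch below builds and parses that value; the CID shown is a placeholder, and the helper names are illustrative rather than part of any IPFS library.

```python
def dnslink_txt(cid: str) -> str:
    # Value to publish in the TXT record at _dnslink.<your-domain>.
    return f"dnslink=/ipfs/{cid}"

def parse_dnslink(value: str) -> str:
    # Extract the CID from a dnslink=/ipfs/... TXT record value.
    prefix = "dnslink=/ipfs/"
    if not value.startswith(prefix):
        raise ValueError("not an /ipfs/ DNSLink record")
    return value[len(prefix):]

cid = "bafybeiexamplecidexamplecidexamplecidexamplecid"  # placeholder CID
record = dnslink_txt(cid)
assert record == f"dnslink=/ipfs/{cid}"
assert parse_dnslink(record) == cid
```

Whichever option you choose, your deployment automation would regenerate this TXT value whenever the site's CID changes.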

Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.