
Shale

Shale aims to bring cloud computing to Filecoin and let Storage Providers (SPs) compete directly with AWS, Google Cloud, etc.

Created At

Hack FEVM

Project Description

With Shale, SPs monetize (i.e., earn FIL from) their computing devices (CPU/RAM/GPU servers) by leasing computing resources together with their sealed data.

Shale targets native high-performance computing such as AI/ML training and inference, typically against large or open datasets. Traditional tasks are also possible, such as C/C++ compilation, short-lived web serving, general data processing, and so on.

How it's Made

The idea is to send programs to where the data is located (i.e., the SP's data center), and to utilize Filecoin+ deals via disk mounting.

Programs here refer to off-chain native executables, e.g., C/C++ binaries, CUDA kernels, or TensorFlow programs.

SSH-payment-tunnel

Just like a normal SSH session, but equipped with a local cryptographic payment tunnel that accumulates fractional payments and aggregates them into ZK proofs at the end. SPs then post the ZK proofs to the Shale layer-2.

Either party can terminate the lease at any time.

The design accommodates future plugins for in-place monitoring/benchmarking of session quality, to catch potential cheating.
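The payment-tunnel mechanism above can be sketched roughly as follows. This is a minimal illustrative mock, not the actual implementation: the class name, the attoFIL units, and the dict "claim" standing in for the real ZK proof are all assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the client-side payment tunnel: each billing tick
# appends one fractional payment; at session end the fractions would be
# aggregated into a single ZK proof (mocked here as a plain summary dict)
# for the SP to post on the Shale layer-2.

@dataclass
class PaymentTunnel:
    rate_attofil_per_sec: int              # agreed price per second of compute
    vouchers: list = field(default_factory=list)

    def tick(self, seconds: int) -> None:
        """Accumulate one fractional payment covering `seconds` of usage."""
        self.vouchers.append(seconds * self.rate_attofil_per_sec)

    def close(self) -> dict:
        """Aggregate all fractions into one claim (stand-in for the ZK proof)."""
        return {"total_attofil": sum(self.vouchers), "n_vouchers": len(self.vouchers)}

tunnel = PaymentTunnel(rate_attofil_per_sec=1_000)
for _ in range(3):
    tunnel.tick(60)                        # three one-minute billing ticks
claim = tunnel.close()                     # {'total_attofil': 180000, 'n_vouchers': 3}
```

Keeping each fraction locally and settling only once at the end is what lets the lease terminate at any time: the SP can always claim exactly what has accumulated so far.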

Shale Layer-2 Marketplace

There will be smart contracts on FVM for coordinating orders between clients and SPs, where:

- Clients create orders with their SSH pub-keys, prices, and time/resource requirements
- SPs accept orders, which clients confirm with collateral
- SPs upload payment-tunnel ZKPs to claim rewards
- Clients post ratings/reviews based on their time/usage

High-performance native computing

Programs run natively in containers with access to local accelerators (GPUs).

All traditional web1/web2 software ecosystems (unix, apt-get, gcc, python, nodejs, etc.) become available, thanks to container technology.

Examples: C/C++ compilation, ML training/inference, general data pipelines/processing, short-lived serving, etc.
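The marketplace order flow described above can be sketched as a small state machine. The states and transition rules here are illustrative only, not the actual FVM contract logic:

```python
from enum import Enum, auto

# Hypothetical sketch of an order's lifecycle in the Shale layer-2 marketplace.

class OrderState(Enum):
    CREATED = auto()    # client posted pub-key, price, time/resource requirements
    ACCEPTED = auto()   # SP accepted; client confirmed with collateral
    SETTLED = auto()    # SP uploaded the payment-tunnel ZKP and claimed rewards
    REVIEWED = auto()   # client posted a rating/review

TRANSITIONS = {
    OrderState.CREATED: OrderState.ACCEPTED,
    OrderState.ACCEPTED: OrderState.SETTLED,
    OrderState.SETTLED: OrderState.REVIEWED,
}

def advance(state: OrderState) -> OrderState:
    """Move an order to its next state, rejecting out-of-order transitions."""
    if state not in TRANSITIONS:
        raise ValueError(f"{state.name} is terminal")
    return TRANSITIONS[state]

state = OrderState.CREATED
state = advance(state)   # ACCEPTED
state = advance(state)   # SETTLED
state = advance(state)   # REVIEWED
```

Enforcing a strict order (collateral before compute, ZKP before rewards, review last) is what lets the contract coordinate two mutually untrusting parties.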

Implementation

The first implementation is a command-line tool (CLI) focused on demonstrating the user experience, which includes:

- Client lists SPs with available datasets
- Client sends a request to a particular SP
- SP spawns a Docker container on a server with the required accelerators (e.g., GPUs)
- SP auto-provisions an SSH server and the client's pub-key inside the container
- Mock SSH-payment-tunnel UI
- Client runs an ML training task remotely in the container
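The SP-side provisioning step in the demo flow might assemble a `docker run` invocation along these lines. This is only a sketch: the image name, port/env conventions, and the `AUTHORIZED_KEY` variable are assumptions, not the actual CLI's behavior.

```python
import shlex

# Hypothetical sketch: build a `docker run` command for a client session with
# GPU access, an exposed in-container SSH server, and the client's pub-key
# passed in via an environment variable for auto-provisioning.

def provision_command(image: str, gpus: int, client_pubkey: str, ssh_port: int) -> str:
    args = [
        "docker", "run", "--detach",
        "--gpus", str(gpus),                      # required accelerators
        "-p", f"{ssh_port}:22",                   # expose the container's SSH server
        "-e", f"AUTHORIZED_KEY={client_pubkey}",  # client's pub-key for sshd setup
        image,
    ]
    return shlex.join(args)                       # shell-safe quoting

cmd = provision_command("shale/ml-base", 2, "ssh-ed25519 AAAA... client@host", 2222)
print(cmd)
```

With the container up and the key installed, the client connects over plain SSH (through the payment tunnel) and runs its training task against the locally mounted sealed data.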
