Compare commits


3 Commits

SHA1        Message                                                Date
089b14e6c9  docs/files.rst: finish chunked upload section          2023-05-20 23:35:28 -04:00
4a80d4097e  Merge branch 'main' into wip-apidocs                   2023-05-20 23:15:41 -04:00
a8f22fb741  sachet/server/files/views.py: fix db integrity error   2023-05-20 23:06:01 -04:00
            ("i'll be honest i have no idea what just happened")
2 changed files with 36 additions and 11 deletions

docs/files.rst

@@ -132,17 +132,44 @@ To allow for uploading large files reliably, Sachet requires that you upload fil
 Partial uploads do not affect the state of the share;
 a new file exists only once all chunks are uploaded.
 Chunks are ordered by their index.
 Once an upload finishes, they are combined in that order to form the new file.
 The server will respond with ``200 OK`` when chunks are sent.
 When the final chunk is sent, and the upload is completed,
 the server will instead respond with ``201 Created``.

 Every chunk has the following schema:

 .. _files_chunk_schema:

-.. code-block:: json
+.. code-block::

-   {
-       "dztotalchunks": 3,
-       "dzchunkindex": 2,
-       "dzuuid": "unique_id"
-   }
+   dztotalchunks = 3
+   dzchunkindex = 2
+   dzuuid = "unique_id"
+   upload = <binary data>

+..
+   TODO...

+.. note::
+   This data is sent via a ``multipart/form-data`` request; it's not JSON.

+.. list-table::
+   :header-rows: 1
+   :widths: 25 25 50
+
+   * - Property
+     - Type
+     - Description
+   * - ``dztotalchunks``
+     - Integer
+     - Total number of chunks the client will send.
+   * - ``dzchunkindex``
+     - Integer
+     - Index of the chunk being sent.
+   * - ``dzuuid``
+     - String
+     - ID which is the same for all chunks in a single upload.
+   * - ``upload``
+     - Binary data (file)
+     - Data contained in this chunk.
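The chunking scheme described in this hunk can be exercised end to end without a server. Below is a minimal sketch: `make_chunks` builds one `multipart/form-data` field dict per chunk using the `dz*` names from the schema above, and `reassemble` does what the docs say the server does once the last chunk arrives (order by index, then concatenate). The helper names and the chunk size are illustrative assumptions, not part of Sachet's API.

```python
import math
import uuid

def make_chunks(data: bytes, chunk_size: int):
    """Split `data` into form-field dicts, one per chunk, using the
    dz* field names from the chunk schema above."""
    total = max(1, math.ceil(len(data) / chunk_size))
    dzuuid = str(uuid.uuid4())  # same ID for every chunk of this upload
    for index in range(total):
        yield {
            "dztotalchunks": total,
            "dzchunkindex": index,
            "dzuuid": dzuuid,
            "upload": data[index * chunk_size:(index + 1) * chunk_size],
        }

def reassemble(chunks) -> bytes:
    """Server side: order chunks by their index and concatenate them
    to form the new file."""
    ordered = sorted(chunks, key=lambda c: c["dzchunkindex"])
    return b"".join(c["upload"] for c in ordered)

chunks = list(make_chunks(b"hello chunked world", chunk_size=8))
assert reassemble(chunks) == b"hello chunked world"
```

A client would send each dict as its own `multipart/form-data` POST (for example with `requests`, passing the `upload` bytes as the file part and the `dz*` fields as form data), expecting `200 OK` for intermediate chunks and `201 Created` for the final one.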

sachet/server/files/views.py

@@ -111,9 +111,7 @@ class FileContentAPI(MethodView):
         if upload.completed:
             share.initialized = True
-            # really convoluted
-            # but otherwise it doesn't cascade deletes?
-            Upload.query.filter(Upload.upload_id == upload.upload_id).delete()
+            db.session.delete(upload)
             db.session.commit()
             return jsonify(dict(status="success", message="Upload completed.")), 201
         else:
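The switch from a bulk `Query.delete()` to `db.session.delete()` matters because SQLAlchemy's bulk delete emits a raw `DELETE` statement and bypasses ORM-level cascade rules, whereas `session.delete()` works on the mapped object and applies its cascades to related rows. A minimal sketch of the difference, using plain SQLAlchemy and hypothetical `Upload`/`Chunk` models (not Sachet's actual schema):

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Upload(Base):
    __tablename__ = "uploads"
    id = Column(Integer, primary_key=True)
    # ORM-level cascade: deleting an Upload should delete its Chunks.
    chunks = relationship("Chunk", cascade="all, delete-orphan")

class Chunk(Base):
    __tablename__ = "chunks"
    id = Column(Integer, primary_key=True)
    upload_id = Column(Integer, ForeignKey("uploads.id"))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Upload(id=1, chunks=[Chunk(), Chunk()]))
    session.commit()

    # Bulk delete: raw DELETE, the cascade rule never runs, so the
    # chunk rows are orphaned (SQLite doesn't enforce the FK here).
    session.query(Upload).filter(Upload.id == 1).delete()
    session.commit()
    orphans = session.query(Chunk).count()  # rows left behind

    session.add(Upload(id=2, chunks=[Chunk(), Chunk()]))
    session.commit()

    # session.delete() applies the "all, delete-orphan" cascade.
    session.delete(session.get(Upload, 2))
    session.commit()
    remaining = session.query(Chunk).filter(Chunk.upload_id == 2).count()
```

Here `orphans` ends up non-zero after the bulk delete, while `remaining` is zero after the object-level delete, which is consistent with moving the view code to `db.session.delete(upload)`.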