
Remade-AI/Squish

1136 runs · 0 comments

API Sample: Remade-AI/Squish

To use the API, sign in or sign up, create a project on the dashboard, and get your API key.
Code samples are available for curl, Node.js, C#, PHP, Swift, Dart, Kotlin, Go, and Python; the examples below use curl (plus browser JavaScript for the socket connection).

Prepare Authentication Signature

                          
  # Sign up on the Wiro dashboard and create a project
  export YOUR_API_KEY="{{useSelectedProjectAPIKey}}";
  export YOUR_API_SECRET="XXXXXXXXX";

  # Nonce: unix time or any random integer value
  export NONCE=$(date +%s);

  # Signature: HMAC-SHA256 of (YOUR_API_SECRET + NONCE), keyed with YOUR_API_KEY
  export SIGNATURE="$(echo -n "${YOUR_API_SECRET}${NONCE}" | openssl dgst -sha256 -hmac "${YOUR_API_KEY}")";
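
If you prefer to build the signature in code rather than with openssl, here is a minimal Python sketch of the same HMAC-SHA256 computation (the key and secret strings are placeholders, not real credentials):

import hashlib
import hmac
import time

API_KEY = "YOUR_API_KEY"        # placeholder: project API key
API_SECRET = "YOUR_API_SECRET"  # placeholder: project API secret

nonce = str(int(time.time()))   # unix time or any random integer value

# HMAC-SHA256 over (secret + nonce), keyed with the API key, mirroring the
# openssl command above (note that openssl also prints a "(stdin)= " prefix).
signature = hmac.new(
    API_KEY.encode(),
    (API_SECRET + nonce).encode(),
    hashlib.sha256,
).hexdigest()

print(nonce, signature)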
      
                        

Create a New Folder - Make HTTP Post Request

Create a New Folder - Response

Upload a File to the Folder - Make HTTP Post Request

Upload a File to the Folder - Response

Run Command - Make HTTP Post Request

                          
  curl -X POST "{{apiUrl}}/Run/{{toolSlugOwner}}/{{toolSlugProject}}"  \
  -H "Content-Type: {{contentType}}" \
  -H "x-api-key: ${YOUR_API_KEY}" \
  -H "x-nonce: ${NONCE}" \
  -H "x-signature: ${SIGNATURE}" \
  -d '{{toolParameters}}';
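
As a rough equivalent outside the shell, the same request can be made from Python, continuing the sketch from the authentication section. This is only a sketch: it assumes the base URL is https://api.wiro.ai/v1 (as seen in the file URLs in the task detail example), that the content type is application/json, that the tool slug resolves to Remade-AI/Squish, and that the tool parameters use the inputImage field shown in the task detail response.

import requests

API_URL = "https://api.wiro.ai/v1"            # assumption: the {{apiUrl}} placeholder
HEADERS = {
    "Content-Type": "application/json",       # assumption: the {{contentType}} placeholder
    "x-api-key": API_KEY,
    "x-nonce": nonce,
    "x-signature": signature,
}

# {{toolParameters}}: the fields of the tool form (see "Tool Parameters" below);
# "inputImage" is the parameter name that appears in the task detail response.
payload = {"inputImage": "https://example.com/input.png"}

resp = requests.post(f"{API_URL}/Run/Remade-AI/Squish", headers=HEADERS, json=payload)
run = resp.json()
print(run["taskid"], run["socketaccesstoken"])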

      
                        

Run Command - Response

                          
  //response body
  {
      "errors": [],
      "taskid": "2221",
      "socketaccesstoken": "eDcCm5yyUfIvMFspTwww49OUfgXkQt",
      "result": true
  }
      
                        

Get Task Detail - Make HTTP Post Request

                          
  curl -X POST "{{apiUrl}}/Task/Detail"  \
  -H "Content-Type: {{contentType}}" \
  -H "x-api-key: ${YOUR_API_KEY}" \
  -H "x-nonce: ${NONCE}" \
  -H "x-signature: ${SIGNATURE}" \
  -d '{
    "tasktoken": "eDcCm5yyUfIvMFspTwww49OUfgXkQt"
  }';

      
                        

Get Task Detail - Response

                          
  //response body
  {
    "total": "1",
    "errors": [],
    "tasklist": [
        {
            "id": "2221",
            "uuid": "15bce51f-442f-4f44-a71d-13c6374a62bd",
            "name": "",
            "socketaccesstoken": "eDcCm5yyUfIvMFspTwww49OUfgXkQt",
            "parameters": {
                "inputImage": "https://api.wiro.ai/v1/File/mCmUXgZLG1FNjjjwmbtPFr2LVJA112/inputImage-6060136.png"
            },
            "debugoutput": "",
            "debugerror": "",
            "starttime": "1734513809",
            "endtime": "1734513813",
            "elapsedseconds": "6.0000",
            "status": "task_postprocess_end",
            "cps": "0.000585000000",
            "totalcost": "0.003510000000",
            "guestid": null,
            "projectid": "699",
            "modelid": "598",
            "description": "",
            "basemodelid": "0",
            "runtype": "model",
            "modelfolderid": "",
            "modelfileid": "",
            "callbackurl": "",
            "marketplaceid": null,
            "createtime": "1734513807",
            "canceltime": "0",
            "assigntime": "1734513807",
            "accepttime": "1734513807",
            "preprocessstarttime": "1734513807",
            "preprocessendtime": "1734513807",
            "postprocessstarttime": "1734513813",
            "postprocessendtime": "1734513814",
            "pexit": "0",
            "categories": ["tool","image-to-image","quick-showcase","compare-landscape"],
            "outputs": [
                {
                    "id": "6bc392c93856dfce3a7d1b4261e15af3",
                    "name": "0.png",
                    "contenttype": "image/png",
                    "parentid": "6c1833f39da71e6175bf292b18779baf",
                    "uuid": "15bce51f-442f-4f44-a71d-13c6374a62bd",
                    "size": "202472",
                    "addedtime": "1734513812",
                    "modifiedtime": "1734513812",
                    "accesskey": "dFKlMApaSgMeHKsJyaDeKrefcHahUK",
                    "foldercount": "0",
                    "filecount": "0",
                    "ispublic": 0,
                    "expiretime": null,
                    "url": "https://cdn1.wiro.ai/6a6af820-c5050aee-40bd7b83-a2e186c6-7f61f7da-3894e49c-fc0eeb66-9b500fe2/0.png"
                }
            ],
            "size": "202472"
        }
    ],
    "result": true
  }
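
Instead of (or in addition to) the socket connection described below, the task can be polled until it finishes. A small sketch continuing the Python example above, using the status and output fields shown in this response (error handling omitted):

import time

def get_task_detail(tasktoken):
    # Same /Task/Detail request as the curl example above.
    resp = requests.post(f"{API_URL}/Task/Detail", headers=HEADERS,
                         json={"tasktoken": tasktoken})
    return resp.json()["tasklist"][0]

task = get_task_detail(run["socketaccesstoken"])
while task["status"] != "task_postprocess_end":   # final status in the example above
    time.sleep(2)
    task = get_task_detail(run["socketaccesstoken"])

for output in task["outputs"]:
    print(output["name"], output["url"])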
      
                        

Get Task Process Information and Results with Socket Connection

                          
<script type="text/javascript">
  window.addEventListener('load',function() {
    //Get socketAccessToken from task run response
    var SocketAccessToken = 'eDcCm5yyUfIvMFspTwww49OUfgXkQt';
    WebSocketConnect(SocketAccessToken);
  });

  //Connect socket with connection id and register task socket token
  async function WebSocketConnect(accessTokenFromAPI) {
    if ("WebSocket" in window) {
        var ws = new WebSocket("wss://socket.wiro.ai/v1");
        ws.onopen = function() {  
          //Register task socket token which has been obtained from task run API response
          ws.send('{"type": "task_info", "tasktoken": "' + accessTokenFromAPI + '"}'); 
        };

        ws.onmessage = function (evt) { 
          var msg = evt.data;

          try {
              var debugHtml = document.getElementById('debug');
              debugHtml.innerHTML = debugHtml.innerHTML + "\n" + msg;

              var msgJSON = JSON.parse(msg);
              console.log('msgJSON: ', msgJSON);

              if(msgJSON.type != undefined)
              {
                console.log('msgJSON.target: ',msgJSON.target);
                switch(msgJSON.type) {
                    case 'task_queue':
                      console.log('Your task is waiting in the queue.');
                    break;
                    case 'task_accept':
                      console.log('Your task has been accepted by the worker.');
                    break;
                    case 'task_preprocess_start':
                      console.log('Your task preprocess has been started.');
                    break;
                    case 'task_preprocess_end':
                      console.log('Your task preprocess has ended.');
                    break;
                    case 'task_assign':
                      console.log('Your task has been assigned a GPU and is waiting in the queue.');
                    break;
                    case 'task_start':
                      console.log('Your task has been started.');
                    break;
                    case 'task_output':
                      console.log('Your task is running and printing the output log.');
                      console.log('Log: ', msgJSON.message);
                    break;
                    case 'task_error':
                      console.log('Your task is running and printing the error log.');
                      console.log('Log: ', msgJSON.message);
                    break;
                    case 'task_output_full':
                      console.log('Your task has completed; printing the full output log.');
                    break;
                    case 'task_error_full':
                      console.log('Your task has completed; printing the full error log.');
                    break;
                    case 'task_end':
                      console.log('Your task has been completed.');
                    break;
                    case 'task_postprocess_start':
                      console.log('Your task postprocess has been started.');
                    break;
                    case 'task_postprocess_end':
                      console.log('Your task postprocess has been completed.');
                      console.log('Outputs: ', msgJSON.message);
                      // Append each output file to the UI as an image preview
                      msgJSON.message.forEach(function(currentValue, index, arr){
                          console.log(currentValue);
                          var filesHtml = document.getElementById('files');
                          filesHtml.innerHTML = filesHtml.innerHTML + '<img src="' + currentValue.url + '" style="height:300px;">';
                      });
                    break;
                }
              }
          } catch (e) {
            console.log('e: ', e);
            console.log('msg: ', msg);
          }
        };

        ws.onclose = function() { 
          alert("Connection is closed..."); 
        };
    } else {              
        alert("WebSocket NOT supported by your Browser!");
    }
  }
</script>
      
                        
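The same socket flow can be followed outside the browser as well. Below is a minimal asyncio sketch using the third-party websockets package (an assumption; any WebSocket client will do), with the endpoint and registration message taken from the script above:

import asyncio
import json

import websockets  # assumption: pip install websockets


async def follow_task(socket_access_token):
    async with websockets.connect("wss://socket.wiro.ai/v1") as ws:
        # Register the task socket token obtained from the Run response.
        await ws.send(json.dumps({"type": "task_info", "tasktoken": socket_access_token}))
        async for raw in ws:
            try:
                event = json.loads(raw)
            except json.JSONDecodeError:
                continue
            print(event.get("type"), event.get("message"))
            if event.get("type") == "task_postprocess_end":
                break  # outputs arrive in this event's message, as in the JS handler


asyncio.run(follow_task("eDcCm5yyUfIvMFspTwww49OUfgXkQt"))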

Prepare UI Elements Inside Body Tag

                          
  <div id="files"></div>
  <pre id="debug"></pre>
      
                        

Tool Parameters

  • Input image (upload or URL): the image to be re-generated
  • Trigger word(s)
  • Prompt: describe the details you want to generate
  • Squish scale
  • Negative prompt

Your request will cost $0.00095 per second; the total cost varies with the request's execution time.

Sample outputs: Remade-AI-Squish-sample-1.mp4, Remade-AI-Squish-sample-2.mp4, Remade-AI-Squish-sample-3.mp4, Remade-AI-Squish-sample-4.mp4, Remade-AI-Squish-sample-5.mp4


Squish Effect LoRA for Wan2.1 14B I2V 480p




Overview


This LoRA is trained on the Wan2.1 14B I2V 480p model and allows you to squish any object in an image. The effect works on a wide variety of objects, from animals to vehicles to people!





Features



  • Transform any image into a video of it being squished

  • Trained on the Wan2.1 14B 480p I2V base model

  • Consistent results across different object types

  • Simple prompt structure that's easy to adapt





Community



  • Discord: Join our community to generate videos with this LoRA for free

  • Request LoRAs: We're training and open-sourcing Wan2.1 LoRAs for free - join our Discord to make requests!






Prompt

In the video, a miniature dog is presented. The dog is held in a person's hands. The person then presses on the dog, causing a sq41sh squish effect. The person keeps pressing down on the dog, further showing the sq41sh squish effect.




Prompt

In the video, a miniature tank is presented. The tank is held in a person's hands. The person then presses on the tank, causing a sq41sh squish effect. The person keeps pressing down on the tank, further showing the sq41sh squish effect.




Prompt

In the video, a miniature balloon is presented. The balloon is held in a person's hands. The person then presses on the balloon, causing a sq41sh squish effect. The person keeps pressing down on the balloon, further showing the sq41sh squish effect.




Prompt

In the video, a miniature rodent is presented. The rodent is held in a person's hands. The person then presses on the rodent, causing a sq41sh squish effect. The person keeps pressing down on the rodent, further showing the sq41sh squish effect.




Prompt

In the video, a miniature person is presented. The person is held in a person's hands. The person then presses on the person, causing a sq41sh squish effect. The person keeps pressing down on the person, further showing the sq41sh squish effect.














Model File and Inference Workflow







📥 Download Links:



  • squish_18.safetensors - LoRA Model File

  • wan_img2video_lora_workflow.json - Wan I2V with LoRA Workflow for ComfyUI







Using with Diffusers


pip install git+https://github.com/huggingface/diffusers.git

import torch
from diffusers.utils import export_to_video, load_image
from diffusers import AutoencoderKLWan, WanImageToVideoPipeline
from transformers import CLIPVisionModel
import numpy as np

model_id = "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers"
image_encoder = CLIPVisionModel.from_pretrained(model_id, subfolder="image_encoder", torch_dtype=torch.float32)
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanImageToVideoPipeline.from_pretrained(model_id, vae=vae, image_encoder=image_encoder, torch_dtype=torch.bfloat16)
pipe.to("cuda")

pipe.load_lora_weights("Remade/Squish")

pipe.enable_model_cpu_offload()  # for low-VRAM environments

prompt = "In the video, a miniature cat toy is presented. The cat toy is held in a person's hands. The person then presses on the cat toy, causing a sq41sh squish effect. The person keeps pressing down on the cat toy, further showing the sq41sh squish effect."

image = load_image("https://huggingface.co/datasets/diffusers/cat_toy_example/resolve/main/1.jpeg")

max_area = 480 * 832
aspect_ratio = image.height / image.width
mod_value = pipe.vae_scale_factor_spatial * pipe.transformer.config.patch_size[1]
height = round(np.sqrt(max_area * aspect_ratio)) // mod_value * mod_value
width = round(np.sqrt(max_area / aspect_ratio)) // mod_value * mod_value
image = image.resize((width, height))

output = pipe(
    image=image,
    prompt=prompt,
    height=height,
    width=width,
    num_frames=81,
    guidance_scale=5.0,
    num_inference_steps=28,
).frames[0]
export_to_video(output, "output.mp4", fps=16)





Recommended Settings



  • LoRA Strength: 1.0 (see the Diffusers sketch after this list)

  • Embedded Guidance Scale: 6.0

  • Flow Shift: 5.0
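
Building on the Diffusers snippet above, the LoRA strength of 1.0 can be applied explicitly when loading the weights. This is only a sketch: the adapter name "squish" is an arbitrary label chosen here, and how the embedded guidance scale and flow shift map onto Diffusers arguments is not covered on this page.

# Sketch: load the LoRA under an explicit adapter name and apply it at strength 1.0.
pipe.load_lora_weights("Remade/Squish", adapter_name="squish")
pipe.set_adapters(["squish"], adapter_weights=[1.0])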





Trigger Words


The key trigger phrase is: sq41sh squish effect





Prompt Template


For best results, use this prompt structure:



In the video, a miniature [object] is presented. The [object] is held in a person's hands. The person then presses on the [object], causing a sq41sh squish effect. The person keeps pressing down on the [object], further showing the sq41sh squish effect.

Simply replace [object] with whatever you want to see squished!
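
If you are scripting generations, the template is easy to fill programmatically; a tiny helper sketch:

SQUISH_TEMPLATE = (
    "In the video, a miniature {obj} is presented. The {obj} is held in a person's hands. "
    "The person then presses on the {obj}, causing a sq41sh squish effect. "
    "The person keeps pressing down on the {obj}, further showing the sq41sh squish effect."
)

def squish_prompt(obj):
    # Fill the recommended prompt structure with the object to squish.
    return SQUISH_TEMPLATE.format(obj=obj)

print(squish_prompt("rubber duck"))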





ComfyUI Workflow


This LoRA works with a modified version of Kijai's Wan Video Wrapper workflow. The main modification is adding a Wan LoRA node connected to the base model.



See the Downloads section above for the modified workflow.







Model Information


The model weights are available in Safetensors format. See the Downloads section above.





Training Details



  • Base Model: Wan2.1 14B I2V 480p

  • Training Data: 1.5 minutes of video (20 short clips of things being squished)

  • Epochs: 18





Additional Information


Training was done using Diffusion Pipe for Training





Acknowledgments


Special thanks to Kijai for the ComfyUI Wan Video Wrapper and tdrussell for the training scripts!



Related Tools

  • AndroidXL/Kamehameha-Energy-Beam: Wan2.1 Kamehameha Energy Beam Effect LoRA (run time: 1 second, 1070 runs)
  • wiro/kissing-with-two-image: Wan2.1 Kissing with two image Effect LoRA (run time: 1 second, 2336 runs)
  • JayHuang_AIGC/Joker-Effect: Wan2.1 Joker Effect LoRA (run time: 1 second, 0 runs)
  • Aiwood/Fluffy-Hair: Fluffy Hair (run time: 1 second, 0 runs)
  • Remade/Decay-Effect: Wan2.1 Decay Effect LoRA (run time: 1 second, 0 runs)
  • Aseer/Walking-to-Viewer: Wan2.1 Walking to Viewer Effect LoRA (run time: 1 second, 0 runs)