Welcome
RunDiffusion/RunDiffusion XL beta
API Sample: RunDiffusion/RunDiffusion XL beta
Get Your API Key
Prepare Authentication Signature
# Sign up to the Wiro dashboard and create a project
export YOUR_API_KEY="{{useSelectedProjectAPIKey}}";
export YOUR_API_SECRET="XXXXXXXXX";
# Unix time or any random integer value
export NONCE=$(date +%s);
# HMAC-SHA256 of (YOUR_API_SECRET + NONCE), keyed with YOUR_API_KEY
export SIGNATURE="$(echo -n "${YOUR_API_SECRET}${NONCE}" | openssl dgst -sha256 -hmac "${YOUR_API_KEY}")";
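Note that `openssl dgst` prefixes its output (e.g. `SHA2-256(stdin)= <hex>` on recent OpenSSL versions). If the API expects only the bare hex digest (an assumption worth verifying against the sample shown in your dashboard), you can keep just the last field:
# Variant that strips the "...(stdin)= " prefix and keeps only the hex digest
# (assumption: the x-signature header should contain the bare hex string)
export SIGNATURE="$(echo -n "${YOUR_API_SECRET}${NONCE}" | openssl dgst -sha256 -hmac "${YOUR_API_KEY}" | awk '{print $NF}')";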
Create a New Folder - Make HTTP Post Request
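The folder-creation request isn't expanded on this page. The sketch below follows the same authenticated pattern as the other calls, but the `Folder/Create` route and the `name` field are assumptions; use the endpoint and parameters shown in your dashboard.
# Hypothetical endpoint and body, for illustration only
curl -X POST "{{apiUrl}}/Folder/Create" \
 -H "Content-Type: {{contentType}}" \
 -H "x-api-key: ${YOUR_API_KEY}" \
 -H "x-nonce: ${NONCE}" \
 -H "x-signature: ${SIGNATURE}" \
 -d '{
 "name": "my-input-folder"
 }';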
Create a New Folder - Response
Upload a File to the Folder - Make HTTP Post Request
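The upload request isn't expanded here either. A rough sketch, assuming a `File/Upload` route, a multipart `file` field, and a `folderaccesskey` taken from the folder-creation response (all hypothetical names):
# Hypothetical endpoint and field names, for illustration only
curl -X POST "{{apiUrl}}/File/Upload" \
 -H "x-api-key: ${YOUR_API_KEY}" \
 -H "x-nonce: ${NONCE}" \
 -H "x-signature: ${SIGNATURE}" \
 -F "file=@./inputImage.png" \
 -F "folderaccesskey=XXXXXXXXX";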
Upload a File to the Folder - Response
Run Command - Make HTTP Post Request
curl -X POST "{{apiUrl}}/Run/{{toolSlugOwner}}/{{toolSlugProject}}" \
-H "Content-Type: {{contentType}}" \
-H "x-api-key: ${YOUR_API_KEY}" \
-H "x-nonce: ${NONCE}" \
-H "x-signature: ${SIGNATURE}" \
-d '{{toolParameters}}';
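`{{toolParameters}}` expands to the JSON parameters of the selected tool. As an illustration only, the Task Detail response further down this page shows a run that used a single `inputImage` parameter, so a filled body could look like the following (your tool may expect different keys):
{
 "inputImage": "https://api.wiro.ai/v1/File/mCmUXgZLG1FNjjjwmbtPFr2LVJA112/inputImage-6060136.png"
}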
Run Command - Response
//response body
{
"errors": [],
"taskid": "2221",
"socketaccesstoken": "eDcCm5yyUfIvMFspTwww49OUfgXkQt",
"result": true
}
Get Task Detail - Make HTTP Post Request
curl -X POST "{{apiUrl}}/Task/Detail" \
-H "Content-Type: {{contentType}}" \
-H "x-api-key: ${YOUR_API_KEY}" \
-H "x-nonce: ${NONCE}" \
-H "x-signature: ${SIGNATURE}" \
-d '{
 "tasktoken": "eDcCm5yyUfIvMFspTwww49OUfgXkQt"
}';
Get Task Detail - Response
//response body
{
"total": "1",
"errors": [],
"tasklist": [
{
"id": "2221",
"uuid": "15bce51f-442f-4f44-a71d-13c6374a62bd",
"name": "",
"socketaccesstoken": "eDcCm5yyUfIvMFspTwww49OUfgXkQt",
"parameters": {
"inputImage": "https://api.wiro.ai/v1/File/mCmUXgZLG1FNjjjwmbtPFr2LVJA112/inputImage-6060136.png"
},
"debugoutput": "",
"debugerror": "",
"starttime": "1734513809",
"endtime": "1734513813",
"elapsedseconds": "6.0000",
"status": "task_postprocess_end",
"cps": "0.000585000000",
"totalcost": "0.003510000000",
"guestid": null,
"projectid": "699",
"modelid": "598",
"description": "",
"basemodelid": "0",
"runtype": "model",
"modelfolderid": "",
"modelfileid": "",
"callbackurl": "",
"marketplaceid": null,
"createtime": "1734513807",
"canceltime": "0",
"assigntime": "1734513807",
"accepttime": "1734513807",
"preprocessstarttime": "1734513807",
"preprocessendtime": "1734513807",
"postprocessstarttime": "1734513813",
"postprocessendtime": "1734513814",
"pexit": "0",
"categories": "["tool","image-to-image","quick-showcase","compare-landscape"]",
"outputs": [
{
"id": "6bc392c93856dfce3a7d1b4261e15af3",
"name": "0.png",
"contenttype": "image/png",
"parentid": "6c1833f39da71e6175bf292b18779baf",
"uuid": "15bce51f-442f-4f44-a71d-13c6374a62bd",
"size": "202472",
"addedtime": "1734513812",
"modifiedtime": "1734513812",
"accesskey": "dFKlMApaSgMeHKsJyaDeKrefcHahUK",
"foldercount": "0",
"filecount": "0",
"ispublic": 0,
"expiretime": null,
"url": "https://cdn1.wiro.ai/6a6af820-c5050aee-40bd7b83-a2e186c6-7f61f7da-3894e49c-fc0eeb66-9b500fe2/0.png"
}
],
"size": "202472"
}
],
"result": true
}
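If you prefer not to keep a socket open, you can also poll `Task/Detail` until the `status` field reaches `task_postprocess_end`, the terminal status shown in the response above. A minimal polling sketch, assuming `jq` is installed for JSON parsing:
# Poll Task/Detail every 5 seconds until the task reaches task_postprocess_end
# (if the API requires a fresh nonce/signature per request, regenerate them inside the loop)
export TASK_TOKEN="eDcCm5yyUfIvMFspTwww49OUfgXkQt";
while true; do
 STATUS=$(curl -s -X POST "{{apiUrl}}/Task/Detail" \
 -H "Content-Type: {{contentType}}" \
 -H "x-api-key: ${YOUR_API_KEY}" \
 -H "x-nonce: ${NONCE}" \
 -H "x-signature: ${SIGNATURE}" \
 -d "{\"tasktoken\": \"${TASK_TOKEN}\"}" | jq -r '.tasklist[0].status');
 echo "status: ${STATUS}";
 if [ "${STATUS}" = "task_postprocess_end" ]; then break; fi;
 sleep 5;
done;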
Get Task Process Information and Results with Socket Connection
<script type="text/javascript">
window.addEventListener('load',function() {
//Get socketAccessToken from task run response
var SocketAccessToken = 'eDcCm5yyUfIvMFspTwww49OUfgXkQt';
WebSocketConnect(SocketAccessToken);
});
//Connect socket with connection id and register task socket token
async function WebSocketConnect(accessTokenFromAPI) {
if ("WebSocket" in window) {
var ws = new WebSocket("wss://socket.wiro.ai/v1");
ws.onopen = function() {
//Register task socket token which has been obtained from task run API response
ws.send('{"type": "task_info", "tasktoken": "' + accessTokenFromAPI + '"}');
};
ws.onmessage = function (evt) {
var msg = evt.data;
try {
var debugHtml = document.getElementById('debug');
debugHtml.innerHTML = debugHtml.innerHTML + "\n" + msg;
var msgJSON = JSON.parse(msg);
console.log('msgJSON: ', msgJSON);
if(msgJSON.type != undefined)
{
console.log('msgJSON.target: ',msgJSON.target);
switch(msgJSON.type) {
case 'task_queue':
console.log('Your task has been waiting in the queue.');
break;
case 'task_accept':
console.log('Your task has been accepted by the worker.');
break;
case 'task_preprocess_start':
console.log('Your task preprocess has been started.');
break;
case 'task_preprocess_end':
console.log('Your task preprocess has been ended.');
break;
case 'task_assign':
console.log('Your task has been assigned GPU and waiting in the queue.');
break;
case 'task_start':
console.log('Your task has been started.');
break;
case 'task_output':
console.log('Your task has been started and printing output log.');
console.log('Log: ', msgJSON.message);
break;
case 'task_error':
console.log('Your task has been started and printing error log.');
console.log('Log: ', msgJSON.message);
break;
case 'task_output_full':
console.log('Your task has been completed and printing full output log.');
break;
case 'task_error_full':
console.log('Your task has been completed and printing full error log.');
break;
case 'task_end':
console.log('Your task has been completed.');
break;
case 'task_postprocess_start':
console.log('Your task postprocess has been started.');
break;
case 'task_postprocess_end':
console.log('Your task postprocess has been completed.');
console.log('Outputs: ', msgJSON.message);
// Add the output files to the UI
msgJSON.message.forEach(function(currentValue, index, arr){
console.log(currentValue);
var filesHtml = document.getElementById('files');
filesHtml.innerHTML = filesHtml.innerHTML + '<img src="' + currentValue.url + '" style="height:300px;">'
});
break;
}
}
} catch (e) {
console.log('e: ', e);
console.log('msg: ', msg);
}
};
ws.onclose = function() {
alert("Connection is closed...");
};
} else {
alert("WebSocket NOT supported by your Browser!");
}
}
</script>
Prepare UI Elements Inside Body Tag
<div id="files"></div>
<pre id="debug"></pre>
Introducing RunDiffusion XL Beta!
So excited to get this out! It's production ready with a few quirks, but hey, it's only been a week, right!?
Join our 12k Member Discord where we help you with your projects, talk about best practices, post exciting updates and news, and share the amazing art we generate.
Why? Isn't SDXL already amazing?
Yes! But we feel this is a step up. SDXL still has issues with people looking plastic, and with eyes, hands, and extra limbs. We couldn't solve all of the problems (hence the beta), but we're close! We tested hundreds of SDXL prompts straight from Civitai.com (using ComfyUI) to make sure the pipelines were identical, and found that this model did produce better images. See for yourself here.
The model tends to favor closeups or "headshots" over full body unless you explicitly prompt for that. We're not sure if that is a desired feature and are open to feedback on it.
Strengths
More photorealistic for animals, cars, and people.
Different Model, different generations. Try it with your favorite prompt!
Better portraits. SDXL likes to put the subject way too far away; this brings things closer.
Eyes and hands are a bit better (from our testing).
Better prompt control. Make sure to use commas and break apart subjects and styles!
Weaknesses
Still a few concepts that look "plastic" (food, for example, but please test your prompts and let us know!)
Can sometimes be too biased toward people.
Creativity is a little bit hindered. This could be considered a "Photorealistic" model. But again, TEST!
Bias towards "sports cars" lol
Notice that RDXL follows the prompt a bit better; when asked for "upper body", it tends to follow that.
How to Use
Prompt as usual. Use commas to separate subjects and styles. If you aren't getting what you want, use the token again. e.g.
iron man made of lava, suit of armor infused with molten lava, ...etc
If you don't get Iron Man in your generation, put that token in again somewhere else, e.g. `iron man made of lava, suit of armor infused with molten lava, marvel iron man super hero, ...`
Use the standard refiner or the 1.0-0.9 refiner: https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/blob/main/sd_xl_refiner_1.0_0.9vae.safetensors
Use ComfyUI
Our Process
We used a small dataset of 700 images and trained for 14k steps over 20 epochs, manually captioning everything to be as descriptive as possible. It turns out that SDXL is pretty receptive to training! The changes we were able to make proved to be extremely exciting. The future is bright for SDXL!
Future Plans
We want to make generations easy, so we're bringing ComfyUI to the platform so you can simply drop in images with metadata to generate with one click. This is the future of workflow sharing and we're very grateful to /u/comfyanonymous for his help with getting this working on our platform. Also a special thanks to /u/Searge for his amazing work on SeargeSDXL. We love that workflow! (Could we work on getting the generation data to work on Civitai.com?)
We will continue to work on this model from the feedback we receive. Get ready for a full release in the coming weeks.