GREETINGS ALL.
We recently opted to move all our images into our database rather than hosting them externally on our own server. The reasons for doing this are threefold:
- Simplicity of maintenance - adding, removing, etc.
- Utilising the huge amount of unused space we have on our Knack account
- Removing the possibility of our server being down and ending up with broken image links.
None of those reasons are earth-shattering. However, in the first instance, I am the sole “computer literate” (some might say illiterate) person in the research team with any semblance of a programming background, albeit in long-past languages.
So trying to set up ways for people to add images to our server and then add an external URL link to the database was fraught with problems, physically and mentally.
So there is a price you pay in doing this: the EXPORT function of Knack ONLY GIVES YOU THE URL of the image and not the image itself.
In our case we have around 37,000-38,000 images. It is, after all, a database about WW2 service personnel, all of whom have now passed away.
We also operate a digital museum concept where images etc. are the primary source of content, so anything we store in the database has a secondary use. The software for this does not support links, only a direct read of files from the location in which the software resides.
So a backup of the images is needed, one which can also be used for the digital museum.
Knack itself does not provide a way to do this - I believe Flows may be able to be set up for it, but frankly I couldn't work that out for my situation - and a search of the forum revealed quite a few people had the same question: HOW DO I BACK THEM UP LOCALLY/TO THE CLOUD?
I believe I have found a great little solution in a freely available piece of software that dates back as far as Windows 7. It is a website scraping system, designed to steal images and content from websites, BUT it WORKS PERFECTLY with a list of image-specific URLs such as the one Knack provides in its CSV export function.
The product is called OCTOPARSE and can be downloaded at https://www.octoparse.com/
I am using the free trial, which limits the number of concurrent scrapes, but it has worked perfectly with alphabetical lists in spreadsheets. You can nominate to keep the original filename (by default it uses hashed names, which are of no value to me), choose which folder to store the images in, and also set how often it runs. For us this will likely be once a month after the initial scrapes are done. There are a lot of options and templates that can be set, and I think I saw a Zapier interface as well if you have the skills to use that to control it.
The paid plans allow you to specify an external cloud destination rather than local drives, but I have this covered by locating the download directory inside our Dropbox folder, so the upload to the cloud happens as a behind-the-scenes task. It also allows you to tell it to ignore duplicates to cut down the amount being downloaded.
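For anyone who would rather script it than drive a GUI, the same idea can be sketched in a few lines of Python: read the Knack CSV export, take the column that holds the image URLs, keep the original filename from the end of each URL, and download into a folder that sits inside Dropbox, skipping anything already there. This is only a rough sketch of the approach described above - the file name export.csv, the column name Photo and the Dropbox path are assumptions for illustration, not anything Knack or Octoparse require.

```python
import csv
import os
from pathlib import Path
from urllib.parse import urlparse, unquote
from urllib.request import urlretrieve

# Assumptions: the Knack CSV export is saved as "export.csv" and the image
# URLs live in a column called "Photo" - adjust both to match your export.
CSV_FILE = "export.csv"
URL_COLUMN = "Photo"

# Download into a folder inside Dropbox so the cloud copy happens
# behind the scenes, as described above.
DEST = Path.home() / "Dropbox" / "knack-images"
DEST.mkdir(parents=True, exist_ok=True)

with open(CSV_FILE, newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        url = (row.get(URL_COLUMN) or "").strip()
        if not url:
            continue  # record has no image attached
        # Keep the original filename from the end of the URL
        filename = unquote(os.path.basename(urlparse(url).path))
        target = DEST / filename
        if target.exists():
            continue  # skip duplicates already downloaded
        try:
            urlretrieve(url, str(target))
            print("saved", filename)
        except Exception as err:
            print("failed", url, err)
```

Scheduled monthly with Windows Task Scheduler (or cron), it covers much the same ground as the paid scheduling option, just without the nice interface.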
It is NOT the fastest thing since sliced bread, but it works and is exactly what I needed!
An external copy of all the images in our database.
Once you get the hang of it, it is simple to set up and operate.
Image 1 - Specify the source spreadsheet or nominate the image files one by one
Image 2 - Create the task setting any parameters
Image 3 - Watch it work its magic
Any questions, don’t hesitate to ask.


