Since Autodesk Forge is a cloud platform built entirely on a RESTful API framework, with no on-premise deployment option for the time being — in particular, Viewer.js cannot be self-hosted and must be loaded from Autodesk's servers — how to support offline applications has long been a question of great interest to developers. This article introduces a Viewer caching sample originally created by Petr, our colleague on the international Forge advocacy team, implemented with the HTML5 APIs typically used in PWA (Progressive Web App) development.

Today there are several techniques available for caching data from a web application or service on a local device. This article demonstrates the Service Worker, Cache, and Channel Messaging APIs, all regulars in Progressive Web App development. Although these APIs are relatively new, they are already widely supported by current browsers; for details, refer to the browser-compatibility tables for each API.

Once a Service Worker has been registered from JavaScript, it intercepts requests from all of the browser's pages to the specified origin or scope and can answer them with cached content. A Service Worker can also use APIs such as IndexedDB, Channel Messaging, and Push. Service Workers execute in a worker context: they cannot manipulate the DOM directly, and they control page loading independently of the pages themselves. A single Service Worker can control multiple pages — whenever a page within its scope loads, the worker attaches to it and takes over — so be careful with global variables: each page does not get its own dedicated worker.

The lifecycle of a Service Worker is as follows:

1. The Service Worker is registered from JavaScript.
2. The browser downloads and executes the worker script.
3. The worker receives an "install" event for one-time setup of the resources it needs.
4. The worker waits until any other running Service Worker has terminated.
5. The worker receives an "activate" event, typically used to clean out its old caches.
6. The worker starts receiving "fetch" events (intercepting network requests and serving cached resources) and "message" events (communicating with the front-end code).

Cache is a storage API. Similar to LocalStorage and IndexedDB, each origin or domain gets its own storage space, which contains uniquely named cache objects storing HTTP requests together with their responses.

Channel Messaging is an API for communication between scripts, supporting two-way messaging between the main page, iframes, Web Workers, and Service Workers.

## Caching strategy

Caching static assets and the data returned by API endpoints is not complicated: they can simply be cached when the Service Worker is installed. Afterwards, when the page sends a request to an API endpoint, the Service Worker returns the cached content immediately, and may re-fetch the resource in the background to refresh the cache as needed.

Caching models is slightly more involved. A model is typically translated into several derivatives, and the generated derivatives often reference other assets, so all of these dependencies have to be discovered and cached as needed. In the code sample for this article, the server provides an endpoint that, given a model's URN, returns the list of URLs of all the resources it requires. When caching a model, the Service Worker can therefore call this endpoint and cache all the related URLs without involving the Viewer at all.

## Code sample

We have built a sample that lets users pick models for offline caching. The source code is available at https://github.com/petrbroz/f… and a live demo at https://forge-offline.herokua…. Let's walk through some of the relevant pieces of code.

The server side of the sample is based on Express. The contents of the public folder are served statically, and the remaining endpoints live under the following three routes:

- GET /api/token — returns an authentication token
- GET /api/models — returns the list of viewable models
- GET /api/models/:urn/files — given a model's URN, returns the list of URLs of all resources it requires

The client side consists of two core scripts: public/javascript/main.js and public/service-worker.js. public/javascript/main.js is mainly responsible for configuring the Viewer and the UI logic. Two important functions sit at the bottom of that script: initServiceWorker, which triggers the registration of the Service Worker, and submitWorkerTask, which sends messages to it:

```javascript
async function initServiceWorker() {
    try {
        const registration = await navigator.serviceWorker.register('/service-worker.js');
        console.log('Service worker registered', registration.scope);
    } catch (err) {
        console.error('Could not register service worker', err);
    }
}
```

On the activate event, we claim control of all instances of our web application potentially running in different browser tabs.

```javascript
async function activateAsync() {
    const clients = await self.clients.matchAll({ includeUncontrolled: true });
    console.log('Claiming clients', clients.map(client => client.url).join(','));
    await self.clients.claim();
}
```

When intercepting requests via the fetch event, we reply with a cached response if there is one. One exception is the GET /api/token endpoint: since our access token has an expiration time, we try to get a fresh token first, and only fall back to the cached one if we don't succeed.

```javascript
async function fetchAsync(event) {
    // When requesting an access token, try getting a fresh one first
    if (event.request.url.endsWith('/api/token')) {
        try {
            const response = await fetch(event.request);
            return response;
        } catch (err) {
            console.log('Could not fetch new token, falling back to cache.', err);
        }
    }
    // If there's a cache match, return it
    const match = await caches.match(event.request.url, { ignoreSearch: true });
    if (match) {
        // If this is a static asset or known API, try updating the cache as well
        if (STATIC_URLS.includes(event.request.url) || API_URLS.includes(event.request.url)) {
            caches.open(CACHE_NAME)
                .then((cache) => cache.add(event.request))
                .catch((err) => console.log('Cache not updated, but that\'s ok…', err));
        }
        return match;
    }
    return fetch(event.request);
}
```

Finally, using the message event we execute "tasks" from the client.

```javascript
async function messageAsync(event) {
    switch (event.data.operation) {
        case 'CACHE_URN':
            try {
                const urls = await cacheUrn(event.data.urn, event.data.access_token);
                event.ports[0].postMessage({ status: 'ok', urls });
            } catch (err) {
                event.ports[0].postMessage({ error: err.toString() });
            }
            break;
        case 'CLEAR_URN':
            try {
                const urls = await clearUrn(event.data.urn);
                event.ports[0].postMessage({ status: 'ok', urls });
            } catch (err) {
                event.ports[0].postMessage({ error: err.toString() });
            }
            break;
        case 'LIST_CACHES':
            try {
                const urls = await listCached();
                event.ports[0].postMessage({ status: 'ok', urls });
            } catch (err) {
                event.ports[0].postMessage({ error: err.toString() });
            }
            break;
    }
}

async function cacheUrn(urn, access_token) {
    console.log('Caching', urn);
    // First, ask our server for all derivatives in this URN, and their file URLs
    const baseUrl = 'https://developer.api.autodesk.com/derivativeservice/v2';
    const res = await fetch(`/api/models/${urn}/files`);
    const derivatives = await res.json();
    // Prepare fetch requests to cache all the URLs
    const cache = await caches.open(CACHE_NAME);
    const options = { headers: { 'Authorization': 'Bearer ' + access_token } };
    const fetches = [];
    const manifestUrl = `${baseUrl}/manifest/${urn}`;
    fetches.push(fetch(manifestUrl, options).then(resp => cache.put(manifestUrl, resp)).then(() => manifestUrl));
    for (const derivative of derivatives) {
        const derivUrl = baseUrl + '/derivatives/' + encodeURIComponent(derivative.urn);
        fetches.push(fetch(derivUrl, options).then(resp => cache.put(derivUrl, resp)).then(() => derivUrl));
        for (const file of derivative.files) {
            const fileUrl = baseUrl + '/derivatives/' + encodeURIComponent(derivative.basePath + file);
            fetches.push(fetch(fileUrl, options).then(resp => cache.put(fileUrl, resp)).then(() => fileUrl));
        }
    }
    // Fetch and cache all URLs in parallel
    const urls = await Promise.all(fetches);
    return urls;
}

async function clearUrn(urn) {
    console.log('Clearing cache', urn);
    const cache = await caches.open(CACHE_NAME);
    const requests = (await cache.keys()).filter(req => req.url.includes(urn));
    await Promise.all(requests.map(req => cache.delete(req)));
    return requests.map(req => req.url);
}

async function listCached() {
    console.log('Listing caches');
    const cache = await caches.open(CACHE_NAME);
    const requests = await cache.keys();
    return requests.map(req => req.url);
}
```

And that's pretty much it.
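The client-side counterpart of messageAsync is the submitWorkerTask helper mentioned earlier; its actual implementation lives in public/javascript/main.js in the repository. A minimal sketch of the Channel Messaging pattern it relies on could look like this (a hypothetical reconstruction, not the sample's exact code — the task shape matches the operations handled by messageAsync):

```javascript
// Hypothetical sketch of submitWorkerTask (the real code in main.js may differ).
// A one-off MessageChannel is created per task: one port stays with the page,
// the other is transferred to the service worker, which replies through it
// (that reply is what event.ports[0].postMessage(...) in messageAsync sends).
function submitWorkerTask(worker, task) {
    return new Promise((resolve, reject) => {
        const channel = new MessageChannel();
        channel.port1.onmessage = (event) => {
            channel.port1.close(); // one-shot channel: close after the reply
            if (event.data.error) {
                reject(event.data.error);
            } else {
                resolve(event.data);
            }
        };
        // Transfer port2 to the worker so it can respond via event.ports[0]
        worker.postMessage(task, [channel.port2]);
    });
}

// Usage from the page, once the service worker controls it:
// const result = await submitWorkerTask(navigator.serviceWorker.controller,
//     { operation: 'CACHE_URN', urn, access_token });
```

Wrapping the channel in a Promise keeps the call sites simple: each UI action just awaits one task and gets either the resulting URL list or an error.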
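The URL construction inside cacheUrn is the part most likely to need adapting, so here is the same logic factored into a pure helper. The names are mine, not the sample's; the derivative shape `{ urn, basePath, files }` follows the response of the GET /api/models/:urn/files endpoint used above:

```javascript
// Pure helper (not part of the original sample) mirroring cacheUrn's URL
// construction: one manifest URL, plus one URL per derivative and per file.
const BASE_URL = 'https://developer.api.autodesk.com/derivativeservice/v2';

function collectUrls(urn, derivatives) {
    const urls = [`${BASE_URL}/manifest/${urn}`];
    for (const derivative of derivatives) {
        urls.push(BASE_URL + '/derivatives/' + encodeURIComponent(derivative.urn));
        for (const file of derivative.files) {
            // basePath + file is URL-encoded as a whole, exactly as in cacheUrn
            urls.push(BASE_URL + '/derivatives/' + encodeURIComponent(derivative.basePath + file));
        }
    }
    return urls;
}
```

Keeping this step separate makes the dependency list easy to unit-test without touching the Cache API or the network.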
If you want to see this code in action, head over to https://forge-offline.herokua… with your favorite (modern) browser, open the dev tools, and try caching one of the listed models using the ☆ symbol next to its title.