Autodesk Forge is a cloud platform built entirely on RESTful APIs and currently offers no on-premises deployment option. In particular, Viewer.js cannot be self-hosted and must be referenced from Autodesk's servers, so how to support offline use has long been a common concern among developers. This article walks through a Viewer caching sample originally created by Petr, our international colleague on the Forge consulting team, built on the HTML5 APIs that are widely used in PWA (Progressive Web App) development.
Today there are several different techniques for caching data from a web application or service on the local device. This article demonstrates an implementation based on the Service Worker, Cache, and Channel Messaging APIs, all of them staples of Progressive Web App development. Although these APIs are relatively new, they already enjoy broad support in modern browsers; see each API's browser compatibility table for details.
Once a service worker has been registered from JavaScript, it intercepts requests that any of the browser's pages make against a given origin and can answer them with cached content. A service worker may also use APIs such as IndexedDB, Channel Messaging, and Push. It runs in a worker context, so it cannot touch the DOM directly, and it controls page loading independently of the pages themselves. A single service worker can control many pages: whenever a page within its scope is loaded, the service worker is installed against it and takes over, so be careful with global state, because each page does not get its own dedicated worker.
The lifecycle of a service worker is as follows (a minimal skeleton wiring these events up is sketched after the list):
Register the service worker from JavaScript
The browser downloads and executes the worker script
The worker receives an "install" event, a one-time opportunity to set up any resources it needs
The worker waits until any previously running service worker has finished
The worker receives an "activate" event, used to clean up caches left behind by older versions of the worker
The worker starts receiving "fetch" events (intercepting network requests and serving cached resources) and "message" events (communicating with the front-end code)
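Pulling these pieces together, a service worker that hooks into all of these events is typically wired up along the following lines. This is only an illustrative skeleton: activateAsync, fetchAsync, and messageAsync are the handlers shown later in this article, while installAsync simply stands in for whatever install-time setup the worker performs.

// Illustrative skeleton of a service worker's event wiring
self.addEventListener('install', (event) => {
    // one-time setup, e.g. pre-caching static resources
    event.waitUntil(installAsync(event));
});
self.addEventListener('activate', (event) => {
    // clean up after older versions of the worker and claim open pages
    event.waitUntil(activateAsync());
});
self.addEventListener('fetch', (event) => {
    // reply to intercepted requests, typically from the cache
    event.respondWith(fetchAsync(event));
});
self.addEventListener('message', (event) => {
    // handle tasks posted by the controlled pages
    event.waitUntil(messageAsync(event));
});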
Cache is a storage API. Like LocalStorage and IndexedDB, each origin gets its own storage area, which contains uniquely named cache objects used to store HTTP requests together with their responses.
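As a quick, stand-alone illustration of the Cache API (the cache name 'demo-cache' and the URL are arbitrary here):

// Store a response in a named cache and read it back later
async function demoCacheApi() {
    const cache = await caches.open('demo-cache');
    await cache.add('/api/models');                    // fetches the URL and stores the response
    const cached = await caches.match('/api/models');  // looks the request up across all caches
    if (cached) {
        console.log('Cached models:', await cached.json());
    }
}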
Channel Messaging is an API for communication between scripts, supporting two-way messaging between the main page, iframes, web workers, and service workers.
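The basic pattern, sketched below with a hypothetical worker object, is to create a MessageChannel, transfer one of its two ports to the other script, and listen for replies on the remaining port:

// Two-way messaging via MessageChannel; 'worker' is assumed to be an existing Web Worker
const channel = new MessageChannel();
channel.port1.onmessage = (event) => console.log('Reply received:', event.data);
worker.postMessage({ operation: 'PING' }, [channel.port2]);  // transfer port2 to the worker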
Caching strategy
Caching things like static assets and the data returned by our API endpoints is not complicated: they can simply be cached when the service worker is installed. Afterwards, whenever a page makes a request to one of these endpoints, the service worker returns the cached content right away, and can fetch the resource again in the background to refresh the cache as needed.
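The install-time handler itself is not among the snippets shown below, so here is a minimal sketch of what it could look like (installAsync from the skeleton earlier). CACHE_NAME, STATIC_URLS, and API_URLS are the constants referenced later in fetchAsync, with illustrative values only; the real lists live in the sample's service-worker.js.

// Minimal sketch of an install-time handler; the constant values below are
// illustrative, the sample's service-worker.js defines its own.
const CACHE_NAME = 'forge-offline';
const STATIC_URLS = ['/', '/index.html', '/javascript/main.js'];
const API_URLS = ['/api/token', '/api/models'];

async function installAsync(event) {
    // Pre-cache the static assets and known API responses in one go
    const cache = await caches.open(CACHE_NAME);
    await cache.addAll([...STATIC_URLS, ...API_URLS]);
}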
Caching models is a bit more involved. A single model is usually translated into several derivatives, and those derivatives frequently reference other assets, so all of these dependencies need to be discovered and cached as required. In the code sample for this article, the backend exposes an endpoint that, given a model's URN, returns the list of URLs of all required resources; when caching a model, the service worker can simply call this endpoint and cache every related URL without involving the Viewer at all.
Code sample
We put together a sample that lets users pick which models to cache for offline use. The source code is available at https://github.com/petrbroz/f… and a live demo at https://forge-offline.herokua…. Let's walk through some of the key code snippets.
The sample's backend is based on Express: it serves the contents of the public folder statically and exposes its remaining endpoints under the following three routes (a rough sketch of this wiring follows the list):
GET /api/token – returns an authentication token
GET /api/models – returns the list of models available for viewing
GET /api/models/:urn/files – given a model's URN, returns the list of URLs of all resources required to view it
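For context, these routes could be wired up roughly as follows; getAccessToken, listModels, and listDerivativeFiles are hypothetical helpers standing in for the sample's actual calls to the Forge REST APIs, and error handling is omitted:

// Hypothetical Express wiring for the three endpoints above
const express = require('express');
const app = express();

app.use(express.static('public'));                        // static hosting of the public folder

app.get('/api/token', async (req, res) => {
    res.json(await getAccessToken());                      // e.g. { access_token, expires_in }
});

app.get('/api/models', async (req, res) => {
    res.json(await listModels());                          // e.g. [{ name, urn }, ...]
});

app.get('/api/models/:urn/files', async (req, res) => {
    // e.g. [{ urn, basePath, files: [...] }, ...], matching what cacheUrn expects below
    res.json(await listDerivativeFiles(req.params.urn));
});

app.listen(process.env.PORT || 3000);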
The client side consists of two core scripts: public/javascript/main.js and public/service-worker.js. The former mainly configures the Viewer and the UI logic, and has two important functions near the bottom of the file: initServiceWorker, which registers the service worker, and submitWorkerTask, which sends messages to it:
async function initServiceWorker() {
    try {
        const registration = await navigator.serviceWorker.register('/service-worker.js');
        console.log('Service worker registered', registration.scope);
    } catch (err) {
        console.error('Could not register service worker', err);
    }
}
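The snippet above only covers initServiceWorker. submitWorkerTask is not reproduced here, but given that the service worker's message handler (shown further down) replies through event.ports[0], it essentially wraps the Channel Messaging pattern in a promise, roughly like this sketch (see the repository for the exact implementation):

// Rough sketch of submitWorkerTask: post a task to the active service worker
// over a MessageChannel and resolve with whatever the worker sends back.
function submitWorkerTask(task) {
    return navigator.serviceWorker.ready.then((registration) => new Promise((resolve, reject) => {
        const channel = new MessageChannel();
        channel.port1.onmessage = (event) => {
            if (event.data.error) {
                reject(event.data.error);
            } else {
                resolve(event.data);
            }
        };
        // port2 travels with the message; the worker replies through event.ports[0]
        registration.active.postMessage(task, [channel.port2]);
    }));
}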
The remaining snippets come from public/service-worker.js. On the activate event, we claim control of all instances of our web application potentially running in different browser tabs.
async function activateAsync() {
    const clients = await self.clients.matchAll({ includeUncontrolled: true });
    console.log('Claiming clients', clients.map(client => client.url).join(','));
    await self.clients.claim();
}
When intercepting requests via the fetch event, we reply with a cached response if there is one. One exception is the GET /api/token endpoint. Since our access token has an expiration time, we try to get a fresh token first, and only fall back to the cached one if we don’t succeed.
async function fetchAsync(event) {
    // When requesting an access token, try getting a fresh one first
    if (event.request.url.endsWith('/api/token')) {
        try {
            const response = await fetch(event.request);
            return response;
        } catch (err) {
            console.log('Could not fetch new token, falling back to cache.', err);
        }
    }
    // If there's a cache match, return it
    const match = await caches.match(event.request.url, { ignoreSearch: true });
    if (match) {
        // If this is a static asset or known API, try updating the cache as well
        if (STATIC_URLS.includes(event.request.url) || API_URLS.includes(event.request.url)) {
            caches.open(CACHE_NAME)
                .then((cache) => cache.add(event.request))
                .catch((err) => console.log('Cache not updated, but that\'s ok...', err));
        }
        return match;
    }
    return fetch(event.request);
}
Finally, using the message event we execute “tasks” from the client.
async function messageAsync(event) {
    switch (event.data.operation) {
        case 'CACHE_URN':
            try {
                const urls = await cacheUrn(event.data.urn, event.data.access_token);
                event.ports[0].postMessage({ status: 'ok', urls });
            } catch (err) {
                event.ports[0].postMessage({ error: err.toString() });
            }
            break;
        case 'CLEAR_URN':
            try {
                const urls = await clearUrn(event.data.urn);
                event.ports[0].postMessage({ status: 'ok', urls });
            } catch (err) {
                event.ports[0].postMessage({ error: err.toString() });
            }
            break;
        case 'LIST_CACHES':
            try {
                const urls = await listCached();
                event.ports[0].postMessage({ status: 'ok', urls });
            } catch (err) {
                event.ports[0].postMessage({ error: err.toString() });
            }
            break;
    }
}
async function cacheUrn(urn, access_token) {
    console.log('Caching', urn);
    // First, ask our server for all derivatives in this URN, and their file URLs
    const baseUrl = 'https://developer.api.autodesk.com/derivativeservice/v2';
    const res = await fetch(`/api/models/${urn}/files`);
    const derivatives = await res.json();
    // Prepare fetch requests to cache all the URLs
    const cache = await caches.open(CACHE_NAME);
    const options = { headers: { 'Authorization': 'Bearer ' + access_token } };
    const fetches = [];
    const manifestUrl = `${baseUrl}/manifest/${urn}`;
    fetches.push(fetch(manifestUrl, options).then(resp => cache.put(manifestUrl, resp)).then(() => manifestUrl));
    for (const derivative of derivatives) {
        const derivUrl = baseUrl + '/derivatives/' + encodeURIComponent(derivative.urn);
        fetches.push(fetch(derivUrl, options).then(resp => cache.put(derivUrl, resp)).then(() => derivUrl));
        for (const file of derivative.files) {
            const fileUrl = baseUrl + '/derivatives/' + encodeURIComponent(derivative.basePath + file);
            fetches.push(fetch(fileUrl, options).then(resp => cache.put(fileUrl, resp)).then(() => fileUrl));
        }
    }
    // Fetch and cache all URLs in parallel
    const urls = await Promise.all(fetches);
    return urls;
}
async function clearUrn(urn) {
    console.log('Clearing cache', urn);
    const cache = await caches.open(CACHE_NAME);
    const requests = (await cache.keys()).filter(req => req.url.includes(urn));
    await Promise.all(requests.map(req => cache.delete(req)));
    return requests.map(req => req.url);
}
async function listCached() {
    console.log('Listing caches');
    const cache = await caches.open(CACHE_NAME);
    const requests = await cache.keys();
    return requests.map(req => req.url);
}
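On the page side, these operations are triggered through submitWorkerTask. An illustrative call, assuming the sketch of submitWorkerTask shown earlier, plus a model URN and an access token obtained from the UI and the /api/token endpoint, might look like this:

// Ask the service worker to cache all derivatives of a model for offline use
async function cacheModelOffline(urn, accessToken) {
    const result = await submitWorkerTask({
        operation: 'CACHE_URN',
        urn: urn,
        access_token: accessToken
    });
    console.log('Cached URLs', result.urls);
}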
And that's pretty much it. If you want to see this code in action, head over to https://forge-offline.herokua… with your favorite (modern) browser, open the dev tools, and try caching one of the listed models using the ☆ symbol next to their title.