{"id":22329,"date":"2019-03-28T06:30:55","date_gmt":"2019-03-28T13:30:55","guid":{"rendered":"https:\/\/insidebigdata.com\/?p=22329"},"modified":"2019-03-29T08:45:12","modified_gmt":"2019-03-29T15:45:12","slug":"distributed-gpu-deep-learning-training","status":"publish","type":"post","link":"https:\/\/insidebigdata.com\/2019\/03\/28\/distributed-gpu-deep-learning-training\/","title":{"rendered":"Distributed GPU Performance for Deep Learning Training"},"content":{"rendered":"<p><em>HPE highlights recent research that explores the performance of GPUs in scale-out and scale-up scenarios for deep learning training.\u00a0<\/em><\/p>\n<div id=\"attachment_22332\" style=\"width: 343px\" class=\"wp-caption alignright\"><img aria-describedby=\"caption-attachment-22332\" decoding=\"async\" loading=\"lazy\" class=\"wp-image-22332 \" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2019\/03\/shutterstock_747164902-1.jpg\" alt=\"deep learning training\" width=\"333\" height=\"222\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2019\/03\/shutterstock_747164902-1.jpg 500w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2019\/03\/shutterstock_747164902-1-150x100.jpg 150w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2019\/03\/shutterstock_747164902-1-300x200.jpg 300w\" sizes=\"(max-width: 333px) 100vw, 333px\" \/><p id=\"caption-attachment-22332\" class=\"wp-caption-text\">Comparisons can be made between the same number of GPUs in scale-up and scale-out configurations from 2 to 16 GPUs. (Photo: Shutterstock\/Pasuwan)<\/p><\/div>\n<p>As companies begin to move deep learning projects from the conceptual stage into a production environment to impact the business, it is reasonable to assume that models will become more complex, the quantity of data involved will grow even further, and that GPU clusters will begin to scale. 
Companies are using the Horovod training framework, which was developed by Uber, to decrease training time on distributed GPU clusters.<\/p>\n<p>The HPE white paper, &#8220;<a href=\"https:\/\/www.hpe.com\/us\/en\/resources\/storage\/requirements-distributed-ai.html\" target=\"_blank\" rel=\"noopener\">Accelerate performance for production AI<\/a>,&#8221; examines the impact of storage on distributed scale-out and scale-up scenarios with common Deep Learning (DL) benchmarks. While the paper shows the storage throughput and bandwidth requirements for both scale-up and scale-out training, it also reveals performance for the same number of GPUs in a scale-up scenario, i.e. GPUs within a single server, versus a scale-out scenario, i.e. GPUs distributed across servers.<\/p>\n<p>The benchmarks were run on 4 HPE Apollo 6500 Gen10 systems, each with 8 NVIDIA Tesla V100 SXM2 16GB GPUs. A Mellanox 100 Gb\/s EDR InfiniBand network connected the servers to one another, as well as to a WekaIO Matrix storage cluster of 8 HPE ProLiant DL360 Gen10 Servers with a total of 32 NVMe SSDs. Further information on the benchmark configuration can be found in the white paper.<\/p>\n<p>A data-parallel approach was used to distribute the training of a model across the servers. 
Each server completes its share of the training, and the results are shared between the servers to calculate an overall update to the model.<\/p>\n<p>Consider the real-data results for ResNet50 with one to 32 GPUs in permutations of 1, 2, or 4 nodes:<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"size-full wp-image-22334 aligncenter\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2019\/03\/HPE_nodes.jpg\" alt=\"deep learning training\" width=\"442\" height=\"225\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2019\/03\/HPE_nodes.jpg 442w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2019\/03\/HPE_nodes-150x76.jpg 150w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2019\/03\/HPE_nodes-300x153.jpg 300w\" sizes=\"(max-width: 442px) 100vw, 442px\" \/><\/p>\n<p>Comparisons can be made between the same number of <a href=\"https:\/\/insidehpc.com\/?s=GPUs\" target=\"_blank\" rel=\"noopener\">GPUs<\/a> in scale-up and scale-out configurations from 2 to 16 GPUs. The largest difference is at 16 GPUs, with an 8.5% gap in performance between 2 servers with 8 GPUs each and 4 servers with 4 GPUs each. The next largest is a 4.9% difference in the 4-GPU scenario, between a single node with 4 GPUs and 2 nodes with 2 GPUs each. Overall, performance is very consistent between scale-up and scale-out configurations with the same number of GPUs. Scaling as the number of GPUs increases is fairly linear up to 8 GPUs; doubling from 8 to 16 GPUs yields only around a 70% increase, which is still a meaningful improvement.<\/p>\n<blockquote><p>Workloads can be scheduled and strategies developed to minimize individual job times, or to maximize the overall number of jobs to be completed within a given time period.<\/p><\/blockquote>\n<p>These results imply that workloads can be managed effectively through scale-out allocation of GPUs. This provides flexibility in server allocation to match workload requirements. 
While further testing is required with larger numbers of GPUs, these benchmarks indicate that the increased performance from adding GPUs is fairly predictable, which means time to solution can also be managed to a reasonable extent.<\/p>\n<p>Workloads can be scheduled and strategies developed to minimize individual job times, or to maximize the overall number of jobs to be completed within a given time period. For instance, many different models could be tested initially with small data sets, and then the system could be configured to aggregate resources to minimize throughput time for a particular model with larger production data sets. And if there is a deadline by which training must be completed, or if training simply takes too long, distributing the workload across many GPUs can reduce training time. This flexibility allows GPU resources to be maximally utilized and provides high ROI since time to results can be minimized.<\/p>\n<p><em>Read about the benchmarks and their results in the white paper: <a href=\"https:\/\/www.hpe.com\/us\/en\/resources\/storage\/requirements-distributed-ai.html\" target=\"_blank\" rel=\"noopener\">Accelerate performance for production AI<\/a> (gated asset)<\/em><\/p>\n<p><em>Learn more about NVIDIA Volta, the Tensor Core GPU architecture designed to bring AI to every industry:\u00a0<a href=\"https:\/\/www.nvidia.com\/en-us\/data-center\/volta-gpu-architecture\/\" target=\"_blank\" rel=\"noopener\">NVIDIA Volta<\/a><\/em><\/p>\n<p><em>Learn more about HPC and AI storage <a href=\"https:\/\/www.hpe.com\/us\/en\/solutions\/hpc-high-performance-computing\/storage.html\" target=\"_blank\" rel=\"noopener\">here.<\/a><\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>If there is a time deadline by which training must be completed, or if it simply takes too long to complete training, distributing the workload across many GPUs can be used to reduce 
training time.\u00a0 This flexibility allows GPU resources to be maximally utilized and provides high ROI since time to results can be minimized. HPE highlights recent research that explores the performance of GPUs in scale-out and scale-up scenarios for deep learning training.\u00a0<\/p>\n","protected":false},"author":10513,"featured_media":22332,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"footnotes":""},"categories":[182,71,87,180,59,67,56,57],"tags":[264,736,538,277,95],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v20.6 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Deep Learning Training: Distributed GPU Performance<\/title>\n<meta name=\"description\" content=\"HPE highlights recent research that explores the performance of GPUs in scale-out and scale-up scenarios for deep learning training.\u00a0\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/insidebigdata.com\/2019\/03\/28\/distributed-gpu-deep-learning-training\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Deep Learning Training: Distributed GPU Performance\" \/>\n<meta property=\"og:description\" content=\"HPE highlights recent research that explores the performance of GPUs in scale-out and scale-up scenarios for deep learning training.\u00a0\" \/>\n<meta property=\"og:url\" content=\"https:\/\/insidebigdata.com\/2019\/03\/28\/distributed-gpu-deep-learning-training\/\" \/>\n<meta property=\"og:site_name\" content=\"insideBIGDATA\" \/>\n<meta property=\"article:publisher\" content=\"http:\/\/www.facebook.com\/insidebigdata\" \/>\n<meta property=\"article:published_time\" content=\"2019-03-28T13:30:55+00:00\" \/>\n<meta 
property=\"article:modified_time\" content=\"2019-03-29T15:45:12+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2019\/03\/shutterstock_747164902-1.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"500\" \/>\n\t<meta property=\"og:image:height\" content=\"334\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Editorial Team\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@insideBigData\" \/>\n<meta name=\"twitter:site\" content=\"@insideBigData\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Editorial Team\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/insidebigdata.com\/2019\/03\/28\/distributed-gpu-deep-learning-training\/\",\"url\":\"https:\/\/insidebigdata.com\/2019\/03\/28\/distributed-gpu-deep-learning-training\/\",\"name\":\"Deep Learning Training: Distributed GPU Performance\",\"isPartOf\":{\"@id\":\"https:\/\/insidebigdata.com\/#website\"},\"datePublished\":\"2019-03-28T13:30:55+00:00\",\"dateModified\":\"2019-03-29T15:45:12+00:00\",\"author\":{\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9\"},\"description\":\"HPE highlights recent research that explores the performance of GPUs in a scale-out and scale-up scenarios for deep learning 
training.\u00a0\",\"breadcrumb\":{\"@id\":\"https:\/\/insidebigdata.com\/2019\/03\/28\/distributed-gpu-deep-learning-training\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/insidebigdata.com\/2019\/03\/28\/distributed-gpu-deep-learning-training\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/insidebigdata.com\/2019\/03\/28\/distributed-gpu-deep-learning-training\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/insidebigdata.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Distributed GPU Performance for Deep Learning Training\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/insidebigdata.com\/#website\",\"url\":\"https:\/\/insidebigdata.com\/\",\"name\":\"insideBIGDATA\",\"description\":\"Your Source for AI, Data Science, Deep Learning &amp; Machine Learning Strategies\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/insidebigdata.com\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9\",\"name\":\"Editorial Team\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g\",\"caption\":\"Editorial Team\"},\"sameAs\":[\"http:\/\/www.insidebigdata.com\"],\"url\":\"https:\/\/insidebigdata.com\/author\/editorial\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Deep Learning Training: Distributed GPU Performance","description":"HPE highlights recent research that explores the performance of GPUs in a scale-out and scale-up scenarios for deep learning training.\u00a0","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/insidebigdata.com\/2019\/03\/28\/distributed-gpu-deep-learning-training\/","og_locale":"en_US","og_type":"article","og_title":"Deep Learning Training: Distributed GPU Performance","og_description":"HPE highlights recent research that explores the performance of GPUs in a scale-out and scale-up scenarios for deep learning training.\u00a0","og_url":"https:\/\/insidebigdata.com\/2019\/03\/28\/distributed-gpu-deep-learning-training\/","og_site_name":"insideBIGDATA","article_publisher":"http:\/\/www.facebook.com\/insidebigdata","article_published_time":"2019-03-28T13:30:55+00:00","article_modified_time":"2019-03-29T15:45:12+00:00","og_image":[{"width":500,"height":334,"url":"https:\/\/insidebigdata.com\/wp-content\/uploads\/2019\/03\/shutterstock_747164902-1.jpg","type":"image\/jpeg"}],"author":"Editorial Team","twitter_card":"summary_large_image","twitter_creator":"@insideBigData","twitter_site":"@insideBigData","twitter_misc":{"Written by":"Editorial Team","Est. 
reading time":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/insidebigdata.com\/2019\/03\/28\/distributed-gpu-deep-learning-training\/","url":"https:\/\/insidebigdata.com\/2019\/03\/28\/distributed-gpu-deep-learning-training\/","name":"Deep Learning Training: Distributed GPU Performance","isPartOf":{"@id":"https:\/\/insidebigdata.com\/#website"},"datePublished":"2019-03-28T13:30:55+00:00","dateModified":"2019-03-29T15:45:12+00:00","author":{"@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9"},"description":"HPE highlights recent research that explores the performance of GPUs in a scale-out and scale-up scenarios for deep learning training.\u00a0","breadcrumb":{"@id":"https:\/\/insidebigdata.com\/2019\/03\/28\/distributed-gpu-deep-learning-training\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/insidebigdata.com\/2019\/03\/28\/distributed-gpu-deep-learning-training\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/insidebigdata.com\/2019\/03\/28\/distributed-gpu-deep-learning-training\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/insidebigdata.com\/"},{"@type":"ListItem","position":2,"name":"Distributed GPU Performance for Deep Learning Training"}]},{"@type":"WebSite","@id":"https:\/\/insidebigdata.com\/#website","url":"https:\/\/insidebigdata.com\/","name":"insideBIGDATA","description":"Your Source for AI, Data Science, Deep Learning &amp; Machine Learning Strategies","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/insidebigdata.com\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9","name":"Editorial 
Team","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g","caption":"Editorial Team"},"sameAs":["http:\/\/www.insidebigdata.com"],"url":"https:\/\/insidebigdata.com\/author\/editorial\/"}]}},"jetpack_featured_media_url":"https:\/\/insidebigdata.com\/wp-content\/uploads\/2019\/03\/shutterstock_747164902-1.jpg","jetpack_shortlink":"https:\/\/wp.me\/p9eA3j-5O9","jetpack-related-posts":[{"id":13379,"url":"https:\/\/insidebigdata.com\/2015\/07\/15\/nvidia-doubles-performance-for-deep-learning-training\/","url_meta":{"origin":22329,"position":0},"title":"NVIDIA Doubles Performance for Deep Learning Training","date":"July 15, 2015","format":false,"excerpt":"NVIDIA announced updates to its GPU-accelerated deep learning software that will double deep learning training performance. The new software will empower data scientists and researchers to supercharge their deep learning projects and product development work.","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":22491,"url":"https:\/\/insidebigdata.com\/2019\/04\/16\/prepare-for-production-ai-with-the-hpe-ai-data-node\/","url_meta":{"origin":22329,"position":1},"title":"Prepare for Production AI with the HPE AI Data Node","date":"April 16, 2019","format":false,"excerpt":"https:\/\/www.youtube.com\/watch?v=xDJvLCwgASA In this video from GTC 2019 in San Jose, Harvey Skinner, Distinguished Technologist, discusses the advent of production AI and how the HPE AI Data Node offers a building block for AI storage. Commercial enterprises have been investigating and exploring how AI can improve their business. 
Now they\u2019re ready\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/img.youtube.com\/vi\/xDJvLCwgASA\/0.jpg?resize=350%2C200","width":350,"height":200},"classes":[]},{"id":21331,"url":"https:\/\/insidebigdata.com\/2018\/10\/25\/time-value-ai-deep-learning-insights\/","url_meta":{"origin":22329,"position":2},"title":"Accelerate Time to Value and AI Insights","date":"October 25, 2018","format":false,"excerpt":"In this edition of Industry Perspectives, HPE explores how reducing the cycle time for inferencing helps to accelerate time to market for deep learning and AI insights and solutions.\u00a0","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"deep learning","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2018\/10\/shutterstock_1096541144.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":29146,"url":"https:\/\/insidebigdata.com\/2022\/04\/27\/hewlett-packard-enterprise-accelerates-ai-journey-from-poc-to-production-with-new-solution-for-ai-development-and-training-at-scale\/","url_meta":{"origin":22329,"position":3},"title":"Hewlett Packard Enterprise Accelerates AI Journey from POC to Production with New Solution for AI Development and Training at Scale","date":"April 27, 2022","format":false,"excerpt":"Hewlett Packard Enterprise (NYSE: HPE) announced that it is removing barriers for enterprises to easily build and train machine learning models at scale, to realize value faster, with the new HPE Machine Learning Development System. 
The new system, which is purpose-built for AI, is an end-to-end solution that integrates a\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2019\/03\/machine-learning_SHUTTERSTOCK.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":22319,"url":"https:\/\/insidebigdata.com\/2019\/03\/21\/scaling-production-ai-enterprises\/","url_meta":{"origin":22329,"position":4},"title":"Scaling Production AI","date":"March 21, 2019","format":false,"excerpt":"As AI models grow larger and more complex, it requires a server architecture that looks much like high performance computing (HPC), with workloads scaled across many servers and distributed processing across the server infrastructure. Barbara Murphy, VP of Marketing, WekaIO, explores how as AI production models grow larger and more\u2026","rel":"","context":"In &quot;Enterprise&quot;","img":{"alt_text":"production AI","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2019\/03\/Barbara-Murphy-hi-res-e1551998824321.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":16723,"url":"https:\/\/insidebigdata.com\/2016\/12\/18\/one-stop-systems-introduces-a-new-line-of-gpu-accelerated-servers-for-deep-learning\/","url_meta":{"origin":22329,"position":5},"title":"One Stop Systems Introduces a New Line of GPU Accelerated Servers for Deep Learning","date":"December 18, 2016","format":false,"excerpt":"One Stop Systems, Inc. (OSS), a leader in PCI Express\u00ae (PCIe\u00ae) expansion technology, introduces two new deep learning appliances, OSS-PASCAL4 and OSS-PASCAL8. 
The OSS-PASCAL8 is a 170 TeraFLOP engine with 80GB\/s NVIDIA\u00ae NVLink\u2122 for the largest deep learning models.","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]}],"_links":{"self":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts\/22329"}],"collection":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/users\/10513"}],"replies":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/comments?post=22329"}],"version-history":[{"count":0,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts\/22329\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/media\/22332"}],"wp:attachment":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/media?parent=22329"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/categories?post=22329"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/tags?post=22329"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}