{"id":3452,"date":"2021-10-28T09:21:49","date_gmt":"2021-10-28T09:21:49","guid":{"rendered":"https:\/\/techpolicy.org.il\/?p=3452"},"modified":"2021-11-08T02:40:44","modified_gmt":"2021-11-08T02:40:44","slug":"artificial-intelligence-in-healthcare-and-social-justice-barriers-and-responses-2","status":"publish","type":"post","link":"https:\/\/techpolicy.org.il\/he\/blog\/artificial-intelligence-in-healthcare-and-social-justice-barriers-and-responses-2\/","title":{"rendered":"Artificial Intelligence in Healthcare and Social Justice: Barriers and Responses"},"content":{"rendered":"<div class=\"wp-block-image\"><figure class=\"aligncenter size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"720\" height=\"405\" src=\"https:\/\/techpolicy.org.il\/wp-content\/uploads\/2021\/10\/iStock-AndreyPopov-924555546_WEB-720x405-1.jpeg\" alt=\"\" class=\"wp-image-3453\" srcset=\"https:\/\/techpolicy.org.il\/wp-content\/uploads\/2021\/10\/iStock-AndreyPopov-924555546_WEB-720x405-1.jpeg 720w, https:\/\/techpolicy.org.il\/wp-content\/uploads\/2021\/10\/iStock-AndreyPopov-924555546_WEB-720x405-1-300x169.jpeg 300w, https:\/\/techpolicy.org.il\/wp-content\/uploads\/2021\/10\/iStock-AndreyPopov-924555546_WEB-720x405-1-16x9.jpeg 16w\" sizes=\"auto, (max-width: 720px) 100vw, 720px\" \/><\/figure><\/div>\n\n\n\n<p>It is
universally accepted that everyone has the <a rel=\"noreferrer noopener\" href=\"https:\/\/www.ohchr.org\/en\/professionalinterest\/pages\/cescr.aspx\" target=\"_blank\"><em>right to the enjoyment of the highest attainable standard of physical and mental health<\/em><\/a>. The right to health implies <a rel=\"noreferrer noopener\" href=\"https:\/\/www.ohchr.org\/documents\/publications\/factsheet31.pdf\" target=\"_blank\">various other entitlements<\/a>, such as the right to a health system providing equal opportunity to enjoy the highest attainable level of health; the right to prevention, treatment and control of diseases; access to essential treatments and medicines; and equal and timely access to basic health services.<\/p>\n\n\n\n<p>Artificial Intelligence (AI) healthcare technologies are on the brink of becoming the highest attainable standard of health. A wide range of AI-based healthcare technologies is in various stages of implementation: <a href=\"https:\/\/www.ncbi.nlm.nih.gov\/pmc\/articles\/PMC6697552\/\" target=\"_blank\" rel=\"noreferrer noopener\">telehealth<\/a>; tools used for <a href=\"https:\/\/www.aidoc.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">prediction and diagnosis<\/a> of illness; tools for <a href=\"https:\/\/venturebeat.com\/2021\/01\/19\/oncohost-raises-8-million-to-develop-ai-that-predicts-cancer-treatment-responses\/\" target=\"_blank\" rel=\"noreferrer noopener\">predicting patient response to treatment<\/a>; decision support systems; conversational agents and virtual personal healthcare assistants; and more. 
Their universal attainability, however, is another matter.<\/p>\n\n\n\n<p>AI healthcare technologies, while holding great promise for improved healthcare outcomes, carry a dual, conflicting potential: narrowing healthcare disparities and, at the same time, exacerbating them.<\/p>\n\n\n\n<p>AI technologies reduce healthcare disparities by <a rel=\"noreferrer noopener\" href=\"https:\/\/www.ncbi.nlm.nih.gov\/pmc\/articles\/PMC6859518\/\" target=\"_blank\">facilitating healthcare delivery and consumption<\/a>. Telemedicine is a familiar example. Another, more specific example is the harnessing of <a href=\"https:\/\/www.eastmojo.com\/news\/2021\/08\/08\/how-digital-health-drones-can-transform-healthcare-in-northeast-india\/\" target=\"_blank\" rel=\"noreferrer noopener\">digital health and drones in Northeast India<\/a> to overcome geographical barriers and improve access to remote locations that are difficult to reach due to topographical constraints and poor road connectivity. These can improve access to vaccines and other medical products, as well as the collection of lab samples. AI stands to take such use of drones beyond proactive home delivery, towards personalised healthcare services.<\/p>\n\n\n\n<p>Such technologies, therefore, present an opportunity for population groups typically excluded from medical research, or struggling to gain access to conventional healthcare services, to benefit from improved, accessible healthcare, thereby somewhat rectifying social injustices. The availability of AI healthcare technologies and their relative ease of accessibility also present a tangible option for democratising healthcare in challenging areas.<\/p>\n\n\n\n<p>On the other hand, by being particularly inaccessible to lower socioeconomic populations, due to various inherent barriers described below, such technologies may exacerbate existing social and healthcare disparities and accentuate bias. 
Furthermore, once they become more prevalent and offer an alternative to conventional healthcare, AI healthcare technologies may increasingly replace health professionals, leading to the <a href=\"http:\/\/www.assembly.coe.int\/LifeRay\/SOC\/Pdf\/TextesProvisoires\/2020\/20200922-HealthCareAI-EN.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">reduction in their number and the deskilling<\/a> of those still practising. That also stands to adversely affect disadvantaged populations, who rely on traditional healthcare. These looming outcomes mandate caution and policy planning in the widespread application of AI technologies in healthcare.<\/p>\n\n\n\n<p><strong>Barriers to reducing global disparities via AI healthcare technology<\/strong><\/p>\n\n\n\n<p>From a socio-global perspective, for AI healthcare technology to be effectively harnessed and used in lower socioeconomic populations and <a href=\"https:\/\/www.ft.com\/content\/8649e35f-29d2-4da0-a1cd-7eece48b7152\" target=\"_blank\" rel=\"noreferrer noopener\">developing countries, several barriers must first be overcome<\/a>. I shall address the following key barriers:<\/p>\n\n\n\n<p>1. <strong>Health illiteracy<\/strong><\/p>\n\n\n\n<p>Commonly associated with access to health care and uptake thereof, <a href=\"https:\/\/www.ohchr.org\/documents\/publications\/factsheet31.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">the right to health extends to include health-related education and information<\/a>, among other <a href=\"https:\/\/www.kff.org\/racial-equity-and-health-policy\/issue-brief\/beyond-health-care-the-role-of-social-determinants-in-promoting-health-and-health-equity\/\" target=\"_blank\" rel=\"noreferrer noopener\">social determinants of health<\/a>. 
Health literacy is a complementary, subjective aspect, pertaining to the ability to realise one&#8217;s right to health in an age of ubiquitous, democratised medical knowledge and empowered patients who self-manage their health. It is essentially about people&#8217;s <a href=\"https:\/\/www.researchgate.net\/publication\/281629581_The_European_Health_Literacy_Survey_Results_from_Ireland\" target=\"_blank\" rel=\"noreferrer noopener\">capacities, skills, and motivation<\/a> to understand, access, and apply health information.<\/p>\n\n\n\n<p>A more advanced, digital-leaning version of health literacy concerns the <em>interaction dimension<\/em>, namely: the &#8216;individual ability and motivation to engage with digital services and the feeling of being safe and in control of digital technology.&#8217;<\/p>\n\n\n\n<p>Health illiteracy is the &#8216;<a href=\"https:\/\/www.thelancet.com\/pdfs\/journals\/lancet\/PIIS0140-6736(09)62137-1.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">inability to comprehend and use medical information<\/a> that can affect access to and use of the healthcare system&#8217;, as well as the processing and application of health information in the context of disease prevention. 
Such inability can consequently affect the capacity of the health system itself &#8216;to serve patients and clients&#8217;. Health illiteracy, <a href=\"https:\/\/www.ncbi.nlm.nih.gov\/pmc\/articles\/PMC5069402\/\" target=\"_blank\" rel=\"noreferrer noopener\">associated with overall poorer health<\/a>, is so widespread and disconcerting that it has gained the term <a href=\"https:\/\/www.thelancet.com\/pdfs\/journals\/lancet\/PIIS0140-6736(09)62137-1.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">&#8216;the silent epidemic&#8217;<\/a>. Strikingly, <a href=\"https:\/\/www.ncbi.nlm.nih.gov\/pmc\/articles\/PMC5069402\/\" target=\"_blank\" rel=\"noreferrer noopener\">half of American adults<\/a> exhibit low health literacy. The <a href=\"https:\/\/pubmed.ncbi.nlm.nih.gov\/25843827\/\" target=\"_blank\" rel=\"noreferrer noopener\">European health literacy survey (HLS-EU)<\/a>, conducted in 2011 across eight European countries, found that 12% of respondents had insufficient health literacy, and <a href=\"https:\/\/cdn1.sph.harvard.edu\/wp-content\/uploads\/sites\/135\/2015\/09\/neu_rev_hls-eu_report_2015_05_13_lit.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">47% had limited<\/a> (insufficient or problematic) health literacy.<\/p>\n\n\n\n<p>In lower socioeconomic populations, digital illiteracy (discussed below) is often accompanied by health illiteracy. The combination of the two seems to breed insecurity and (uninformed) distrust in health information technologies, thus deterring their adoption by those who may stand to benefit the most from technological advances.<\/p>\n\n\n\n<p>2. 
<strong>Digital illiteracy and the &#8216;digital divide&#8217;<\/strong><\/p>\n\n\n\n<p>As our daily lives become ever more digital, digital literacy \u2013 the ability and skills to autonomously and successfully navigate digital environments \u2013 becomes a <em>sine qua non<\/em> for managing one&#8217;s life in digital and virtual space. This goes beyond personal convenience and social acceptance: access to various services, and the realisation of one&#8217;s human and civil rights, increasingly depend on the ability to interact comfortably with digital systems.<\/p>\n\n\n\n<p>Digital illiteracy is typically compounded by (physical and financial) inaccessibility of internet connectivity, information and communication technology infrastructure, and devices \u2013 what the World Health Organization (WHO) dubs the <em>&#8216;digital divide&#8217;<\/em>.<\/p>\n\n\n\n<p>This is a fundamental barrier to the effective uptake of AI healthcare tools in resource-poor countries and rural areas. <a href=\"https:\/\/www.who.int\/publications\/i\/item\/9789240029200\" target=\"_blank\" rel=\"noreferrer noopener\">The &#8216;digital divide&#8217; refers to <\/a>the inequitable &#8216;distribution of access to, use of or effect&#8217; of digital resources among distinct groups. Given the dynamic nature of emerging AI healthcare solutions and their inherent personal and public health benefits, the digital divide is not a static, descriptive concept but one carrying the potential to exacerbate existing healthcare inequalities, unless countries take appropriate measures to tackle it. But responsible and ethical governance does not stop here. 
<a href=\"https:\/\/www.who.int\/publications\/i\/item\/9789240029200\" target=\"_blank\" rel=\"noreferrer noopener\">Technology providers will also be required<\/a> to play their part in the interest of social justice in healthcare, by providing affordable devices and interoperable infrastructure and services that allow different platforms\/applications to operate seamlessly with one another.<\/p>\n\n\n\n<p>The essentiality of digital literacy in our digital age, and the dependence on such literacy for accessing medical services and monitoring, were keenly felt during the COVID-19 pandemic, particularly by <a href=\"https:\/\/www.frontiersin.org\/articles\/10.3389\/feduc.2021.716025\/full\" target=\"_blank\" rel=\"noreferrer noopener\">the elderly<\/a>, confined by social isolation.<\/p>\n\n\n\n<p>3. <strong><a href=\"https:\/\/case.edu\/law\/sites\/case.edu.law\/files\/2021-01\/Sharona%20CLE%202-2021_0.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Algorithmic discrimination<\/a><\/strong><\/p>\n\n\n\n<p>Medical algorithms are often plagued by bias historically embedded in the data on which they are trained and operated. 
Oftentimes, it is the lack of diversity and the underrepresentation of underserved populations in the training set that has a direct bearing on <a href=\"https:\/\/www.wired.com\/story\/how-algorithm-favored-whites-over-blacks-health-care\/\" target=\"_blank\" rel=\"noreferrer noopener\">the algorithm&#8217;s ability to reduce various unexplained disparities<\/a>. This, in turn, generates (or rather, reflects) discrimination against particular groups, creates novel health inequalities, or exacerbates existing ones.<\/p>\n\n\n\n<p>An often-cited example is that of a widely deployed <a href=\"https:\/\/science.sciencemag.org\/content\/366\/6464\/447\" target=\"_blank\" rel=\"noreferrer noopener\">algorithm used by health systems to identify patients<\/a> who would be candidates for &#8216;high-risk care management&#8217; and thus potentially benefit from special attention. The algorithm, relying on patients\u2019 medical histories and past healthcare expenditure to predict medical risks, exhibited significant racial bias, since racial minorities typically have lower access to healthcare services and spend less on healthcare than other social groups. Consideration of actual healthcare expenditure, therefore, failed to authentically reflect these patients\u2019 health risks or status.<\/p>\n\n\n\n<p><strong>Measures to mitigate healthcare disparities via AI healthcare technologies<\/strong><\/p>\n\n\n<p><a href=\"https:\/\/www.frontiersin.org\/articles\/10.3389\/feduc.2021.716025\/full\" target=\"_blank\" rel=\"noreferrer noopener\" data-rich-text-format-boundary=\"true\">1. <strong>Digital inclusion<\/strong><\/a><\/p>\n<p>Digital inclusion is mainly about education in information and communication technologies, and the development of a basic set of digital skills for managing one&#8217;s health on digital platforms and in other walks of (digitised) life. 
Where digital <em>exclusion<\/em> is identified, a potential solution can come in the form of accessible and affordable <a href=\"https:\/\/www.frontiersin.org\/articles\/10.3389\/feduc.2021.716025\/full\" target=\"_blank\" rel=\"noreferrer noopener\">digital literacy workshops<\/a> for digitally illiterate populations. Higher rates of digital inclusion will increase the potential uptake of AI healthcare technologies, which, in turn, can improve access to healthcare and reduce health inequities.<\/p>\n<p>2. <strong>Increasing health data inclusivity<\/strong><\/p>\n<p>As noted above, the underrepresentation in health data of marginalised communities \u2013 in other words, individuals who are not Caucasians of European descent \u2013 is an infamous fact. Exclusion from health datasets entails that newly developed drugs, therapies and various biomedical technologies may be inapplicable to such populations. Including underrepresented groups in health (mainly genetic) databases, through research participation and the deliberate collection of more representative health data, will help ensure that \u2018<a href=\"https:\/\/www.theguardian.com\/technology\/2021\/oct\/20\/ai-projects-to-tackle-racial-inequality-in-uk-healthcare-says-javid\">datasets for training and testing AI healthcare technologies are diverse and inclusive<\/a>.\u2019 The UK, for instance, is set to implement a series of hi-tech initiatives for tackling health disparities among Black, Asian and minority ethnic Britons. One such initiative would be <a href=\"https:\/\/www.theguardian.com\/technology\/2021\/oct\/20\/ai-projects-to-tackle-racial-inequality-in-uk-healthcare-says-javid\">drawing up new standards for health data inclusivity<\/a>.<\/p>\n<p>3. 
<strong>Correcting algorithmic discrimination<\/strong><\/p>\n<p>While (medical) algorithms are typically perceived as a cause of bias, some suggest they can be harnessed to reduce health disparities. Arguably, this can be done by <a href=\"https:\/\/science.sciencemag.org\/content\/366\/6464\/447\">reformulating the algorithm<\/a> so that it no longer relies on bias-inducing data, or by taking a preemptive approach, e.g., proactively harnessing <a href=\"https:\/\/www.wired.com\/story\/new-algorithms-reduce-racial-disparities-health-care\/?redirectURL=https%3A%2F%2Fwww.wired.com%2Fstory%2Fnew-algorithms-reduce-racial-disparities-health-care%2F\">algorithms to remedy racial disparities in healthcare<\/a>. This can broadly include using algorithms to investigate the factors behind adverse health outcomes for patients of underserved communities, such as UK <a href=\"https:\/\/www.bbc.com\/news\/uk-england-47115305\">Black women\u2019s five-fold higher mortality rate (compared with white women) due to pregnancy-related complications<\/a>.<\/p>\n<p style=\"text-align: left;\">Another illustration of the corrective power of algorithms is the development of an <a style=\"font-size: inherit;\" href=\"https:\/\/www.nature.com\/articles\/s41591-020-01192-7\">algorithmic approach aimed at reducing unexplained pain disparities<\/a><span style=\"font-size: inherit;\"> in marginalised populations. 
Such an approach can potentially improve prognosis and risk assessment in these populations. While National Institutes of Health data indicate that Black patients and lower-income populations report higher levels of pain, a <\/span><a style=\"font-size: inherit;\" href=\"https:\/\/www.wired.com\/story\/new-algorithms-reduce-racial-disparities-health-care\/?redirectURL=https%3A%2F%2Fwww.wired.com%2Fstory%2Fnew-algorithms-reduce-racial-disparities-health-care%2F\">recent study of an AI system in radiology<\/a><span style=\"font-size: inherit;\"> found that &#8216;radiologists may have literal blind spots when it comes to reading Black patients&#8217; x-rays.&#8217; That is, they are simply not \u2018as proficient in assessing knee pain in Black patients.\u2019 This should come as no surprise, as presently used pain grading is based on &#8216;a small 1957 study in a northern England mill town with a less diverse population than the modern US.&#8217; The AI-in-radiology study concluded that algorithms trained on Black patients\u2019 own accounts of pain, rather than on medical experts&#8217; opinions, can promote more equitable healthcare.<\/span><\/p>\n<p>On the other hand, a <a href=\"https:\/\/www.nejm.org\/doi\/full\/10.1056\/NEJMms2004740\">2020 study published in the <em>New England Journal of Medicine<\/em><\/a> illustrated the potential for <em>race-adjusted algorithms<\/em> \u2013 namely, &#8216;diagnostic algorithms\u2026 that adjust or &#8220;correct&#8221; their outputs on the basis of a patient&#8217;s race or ethnicity&#8217; \u2013 to perpetuate or exacerbate race-based health inequities. It was found that many of these algorithms, by including race in their underlying data, guide clinical decisions in ways that may direct more attention or resources to white patients than to patients of racial and ethnic minorities. 
The consideration of race\/ethnicity also impairs individualised risk assessment, as such <a href=\"https:\/\/case.edu\/law\/sites\/case.edu.law\/files\/2021-01\/Sharona%20CLE%202-2021_0.pdf\">algorithms typically &#8216;underestimated African Americans&#8217; risks<\/a> of kidney stones, death from heart failure, and other medical problems.&#8217;<\/p>\n<p>One may suggest that the merits of using <em>race-adjusted algorithms<\/em> are yet to be determined, and that, until then, they should be evaluated according to circumstance and with great caution.<\/p>\n\n\n<p><strong>Conclusion<\/strong><\/p>\n\n\n\n<p>To conclude, AI healthcare technologies have the power to remedy past wrongs, as well as to perpetuate them. From a global perspective, they also carry the potential to <a href=\"https:\/\/www.ft.com\/content\/8649e35f-29d2-4da0-a1cd-7eece48b7152\">democratise healthcare<\/a>, particularly in developing countries, provided that the deployment of AI among resource-poor healthcare providers is made more equitable. The gradual removal of health and digital literacy barriers, together with proactive, thoughtful and creative design of medical algorithms, can be valuable in remedying some health inequalities and promoting social health justice among medically neglected populations.<\/p>","protected":false},"excerpt":{"rendered":"<p>It is universally accepted that everyone has the right to the enjoyment of the highest attainable standard of physical and mental health. 
The right to health implies various other entitlements, [&hellip;]<\/p>\n","protected":false},"author":15,"featured_media":3453,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_seopress_robots_primary_cat":"9","_seopress_titles_title":"Artificial Intelligence in Healthcare and Social Justice: Barriers and Responses %%post_title%% %%sitetitle%% %%sep%%","_seopress_titles_desc":"With the increasing advent of Artificial Intelligence (AI) healthcare technologies \u2013 health illiteracy, digital illiteracy and the &#039;digital divide&#039;, and algorithmic discrimination are becoming prominent barriers to the realisation of the universal right to health. Such obstacles impede social justice by creating healthcare disparities or exacerbating existing ones. In this blog article, we introduce these barriers and point out three potential measures for mitigating such disparities via AI healthcare technologies: digital inclusion, increasing health data inclusivity, and correcting algorithmic discrimination, all in the interest of equitable realization of the right to health through advanced health-promoting tools. 
","_seopress_robots_index":"","inline_featured_image":false,"footnotes":""},"categories":[9,10],"tags":[41,56,61,36,59,58,60],"publication_type":[],"class_list":["post-3452","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-blog","category-in-the-news","tag-ai","tag-artificial-intelligence","tag-digital-divide","tag-digital-health","tag-digital-illiteracy","tag-digital-inclusion","tag-health-illiteracy"],"acf":[],"_links":{"self":[{"href":"https:\/\/techpolicy.org.il\/he\/wp-json\/wp\/v2\/posts\/3452","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/techpolicy.org.il\/he\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/techpolicy.org.il\/he\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/techpolicy.org.il\/he\/wp-json\/wp\/v2\/users\/15"}],"replies":[{"embeddable":true,"href":"https:\/\/techpolicy.org.il\/he\/wp-json\/wp\/v2\/comments?post=3452"}],"version-history":[{"count":28,"href":"https:\/\/techpolicy.org.il\/he\/wp-json\/wp\/v2\/posts\/3452\/revisions"}],"predecessor-version":[{"id":3569,"href":"https:\/\/techpolicy.org.il\/he\/wp-json\/wp\/v2\/posts\/3452\/revisions\/3569"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/techpolicy.org.il\/he\/wp-json\/wp\/v2\/media\/3453"}],"wp:attachment":[{"href":"https:\/\/techpolicy.org.il\/he\/wp-json\/wp\/v2\/media?parent=3452"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/techpolicy.org.il\/he\/wp-json\/wp\/v2\/categories?post=3452"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/techpolicy.org.il\/he\/wp-json\/wp\/v2\/tags?post=3452"},{"taxonomy":"publication_type","embeddable":true,"href":"https:\/\/techpolicy.org.il\/he\/wp-json\/wp\/v2\/publication_type?post=3452"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}