/* C++ modules.  Experimental!
   Copyright (C) 2017-2024 Free Software Foundation, Inc.
   Written by Nathan Sidwell <nathan@acm.org> while at Facebook

   This file is part of GCC.

   GCC is free software; you can redistribute it and/or modify it
   under the terms of the GNU General Public License as published by
   the Free Software Foundation; either version 3, or (at your option)
   any later version.

   GCC is distributed in the hope that it will be useful, but
   WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
   General Public License for more details.

   You should have received a copy of the GNU General Public License
   along with GCC; see the file COPYING3.  If not see
   <http://www.gnu.org/licenses/>.  */

/* Comments in this file have a non-negligible chance of being wrong
   or at least inaccurate.  This is due to (a) my misunderstanding,
   (b) ambiguities that I have interpreted differently from the
   original intent, (c) changes in the specification, (d) my poor
   wording, and (e) source changes.  */

/* (Incomplete) Design Notes

   A hash table contains all module names.  Imported modules are
   present in a modules array, which by construction places an
   import's dependencies before the import itself.  The single
   exception is the current TU, which always occupies slot zero (even
   when it is not a module).

   Imported decls occupy an entity_ary, an array of binding_slots,
   indexed by importing module and index within that module.  A flat
   index is used, as each module reserves a contiguous range of
   indices.  Initially each slot indicates the CMI section containing
   the streamed decl.  When the decl is imported it will point to the
   decl itself.

   Additionally each imported decl is mapped in the entity_map via its
   DECL_UID to the flat index in the entity_ary.  Thus we can locate
   the index for any imported decl by using this map and then
   de-flattening the index via a binary search of the module vector.
   Cross-module references are by (remapped) module number and
   module-local index.

   Each importable DECL contains several flags.  The simple set are
   DECL_MODULE_EXPORT_P, DECL_MODULE_PURVIEW_P, DECL_MODULE_ATTACH_P
   and DECL_MODULE_IMPORT_P.  The first indicates whether it is
   exported, the second whether it is in module or header-unit
   purview.  The third indicates it is attached to the named module in
   whose purview it resides and the fourth indicates whether it was an
   import into this TU or not.  DECL_MODULE_ATTACH_P will be false for
   all decls in a header-unit, and for those in a named module inside
   a linkage declaration.

   The more detailed flags are DECL_MODULE_PARTITION_P and
   DECL_MODULE_ENTITY_P.  The first is set in a primary interface unit
   on decls that were read from module partitions (these will have
   DECL_MODULE_IMPORT_P set too).  Such decls will be streamed out to
   the primary's CMI.  DECL_MODULE_ENTITY_P is set when an entity is
   imported, even if it matched a non-imported entity.  Such a decl
   will not have DECL_MODULE_IMPORT_P set, even though it has an entry
   in the entity map and array.

   Header units are module-like.

   For namespace-scope lookup, the decls for a particular module are
   located in a sparse array hanging off the binding of the name.
   This is partitioned into two: a few fixed slots at the start
   followed by the sparse slots afterwards.  By construction we only
   need to append new slots to the end -- there is never a need to
   insert in the middle.  The fixed slots are MODULE_SLOT_CURRENT for
   the current TU (regardless of whether it is a module or not),
   MODULE_SLOT_GLOBAL and MODULE_SLOT_PARTITION.  These latter two
   slots are used for merging entities across the global module and
   module partitions respectively.  MODULE_SLOT_PARTITION is only
   present in a module.  Neither of those two slots is searched during
   name lookup -- they are internal use only.  This vector is created
   lazily once we require it; if there is only a declaration from the
   current TU, a regular binding is present.  It is converted on
   demand.

   OPTIMIZATION: Outside of the current TU, we only need ADL to work.
   We could optimize regular lookup for the current TU by glomming all
   the visible decls on its slot.  Perhaps wait until the design is a
   little more settled though.

   There is only one instance of each extern-linkage namespace.  It
   appears in every module slot that makes it visible.  It also
   appears in MODULE_SLOT_GLOBAL.  (It is an ODR violation if they
   collide with some other global module entity.)  We also have an
   optimization that shares the slot for adjacent modules that declare
   the same such namespace.

   A module interface compilation produces a Compiled Module Interface
   (CMI).  The format used is Encapsulated Lazy Records Of Numbered
   Declarations, which is essentially ELF's section encapsulation.  (As
   all good nerds are aware, Elrond is half Elf.)  Some sections are
   named, and contain information about the module as a whole (indices
   etc), and other sections are referenced by number.  Although I
   don't defend against actively hostile CMIs, there is some
   checksumming involved to verify data integrity.  When dumping out
   an interface, we generate a graph of all the
   independently-redeclarable DECLS that are needed, and the decls
   they reference.  From that we determine the strongly connected
   components (SCCs) within this TU.  Each SCC is dumped to a separate
   numbered section of the CMI.  We generate a binding table section,
   mapping each namespace&name to a defining section.  This allows
   lazy loading.

   Lazy loading employs mmap to map a read-only image of the CMI.
   It thus only occupies address space and is paged in on demand,
   backed by the CMI file itself.  If mmap is unavailable, regular
   file IO is used.  Also, there's a bespoke ELF reader/writer here,
   which implements just the section table and sections (including
   string sections) of a 32-bit ELF in host byte-order.  You can of
   course inspect it with readelf.  I figured 32-bit is sufficient,
   for a single module.  I detect running out of section numbers, but
   do not implement the ELF overflow mechanism.  At least you'll get
   an error if that happens.

   We do not separate declarations and definitions.  My guess is that
   if you refer to the declaration, you'll also need the definition
   (template body, inline function, class definition etc).  But this
   does mean we can get larger SCCs than if we separated them.  It is
   unclear whether this is a win or not.

   Notice that we embed section indices into the contents of other
   sections.  Thus random manipulation of the CMI file by ELF tools
   may well break it.  The kosher way would probably be to introduce
   indirection via section symbols, but that would require defining a
   relocation type.

   Notice that lazy loading of one module's decls can cause lazy
   loading of other decls in the same or another module.  Clearly we
   want to avoid loops.  In a correct program there can be no loops in
   the module dependency graph, and the above-mentioned SCC algorithm
   places all intra-module circular dependencies in the same SCC.  It
   also orders the SCCs wrt each other, so dependent SCCs come first.
   As we load dependent modules first, we know there can be no
   reference to a higher-numbered module, and because we write out
   dependent SCCs first, likewise for SCCs within the module.  This
   allows us to immediately detect broken references.  When loading,
   we must ensure the rest of the compiler doesn't cause some
   unconnected load to occur (for instance, instantiate a template).

   Classes used:

   dumper - logger

   data - buffer

   bytes_in : data - scalar reader
   bytes_out : data - scalar writer

   bytes_in::bits_in - bit stream reader
   bytes_out::bits_out - bit stream writer

   elf - ELROND format
   elf_in : elf - ELROND reader
   elf_out : elf - ELROND writer

   trees_in : bytes_in - tree reader
   trees_out : bytes_out - tree writer

   depset - dependency set
   depset::hash - hash table of depsets
   depset::tarjan - SCC determinator

   uidset<T> - set of T's related to a UID
   uidset<T>::hash - hash table of uidset<T>

   loc_spans - location map data

   module_state - module object

   slurping - data needed during loading

   macro_import - imported macro data
   macro_export - exported macro data

   The ELROND objects use mmap, for both reading and writing.  If mmap
   is unavailable, fileno IO is used to read and write blocks of data.

   The mapper object uses fileno IO to communicate with the server or
   program.  */

/* In experimental (trunk) sources, MODULE_VERSION is a #define passed
   in from the Makefile.  It records the modification date of the
   source directory -- that's the only way to stay sane.  In release
   sources, we (plan to) use the compiler's major.minor versioning.
   While the format might not change between minor versions, it
   seems simplest to tie the two together.  There's no concept of
   inter-version compatibility.  */
#define IS_EXPERIMENTAL(V) ((V) >= (1U << 20))
#define MODULE_MAJOR(V) ((V) / 10000)
#define MODULE_MINOR(V) ((V) % 10000)
#define EXPERIMENT(A,B) (IS_EXPERIMENTAL (MODULE_VERSION) ? (A) : (B))
#ifndef MODULE_VERSION
#include "bversion.h"
#define MODULE_VERSION (BUILDING_GCC_MAJOR * 10000U + BUILDING_GCC_MINOR)
#elif !IS_EXPERIMENTAL (MODULE_VERSION)
#error "This is not the version I was looking for."
#endif

#define _DEFAULT_SOURCE 1 /* To get TZ field of struct tm, if available.  */
#include "config.h"
#define INCLUDE_MEMORY
#define INCLUDE_STRING
#define INCLUDE_VECTOR
#include "system.h"
#include "coretypes.h"
#include "cp-tree.h"
#include "timevar.h"
#include "stringpool.h"
#include "dumpfile.h"
#include "bitmap.h"
#include "cgraph.h"
#include "varasm.h"
#include "tree-iterator.h"
#include "cpplib.h"
#include "mkdeps.h"
#include "incpath.h"
#include "libiberty.h"
#include "stor-layout.h"
#include "version.h"
#include "tree-diagnostic.h"
#include "toplev.h"
#include "opts.h"
#include "attribs.h"
#include "intl.h"
#include "langhooks.h"
/* This TU doesn't need or want to see the networking.  */
#define CODY_NETWORKING 0
#include "mapper-client.h"
#include <zlib.h> // for crc32, crc32_combine

#if 0 // 1 for testing no mmap
#define MAPPED_READING 0
#define MAPPED_WRITING 0
#else
#if HAVE_MMAP_FILE && _POSIX_MAPPED_FILES > 0
/* mmap, munmap.  */
#define MAPPED_READING 1
#if HAVE_SYSCONF && defined (_SC_PAGE_SIZE)
/* msync, sysconf (_SC_PAGE_SIZE), ftruncate.  */
/* posix_fallocate used if available.  */
#define MAPPED_WRITING 1
#else
#define MAPPED_WRITING 0
#endif
#else
#define MAPPED_READING 0
#define MAPPED_WRITING 0
#endif
#endif

/* Some open(2) flag differences, what a colourful world it is!  */
#if defined (O_CLOEXEC)
// OK
#elif defined (_O_NOINHERIT)
/* Windows' _O_NOINHERIT matches the O_CLOEXEC flag.  */
#define O_CLOEXEC _O_NOINHERIT
#else
#define O_CLOEXEC 0
#endif
#if defined (O_BINARY)
// Ok?
#elif defined (_O_BINARY)
/* Windows' open(2) call defaults to text!  */
#define O_BINARY _O_BINARY
#else
#define O_BINARY 0
#endif

static inline cpp_hashnode *
cpp_node (tree id)
{
  return CPP_HASHNODE (GCC_IDENT_TO_HT_IDENT (id));
}

static inline tree
identifier (const cpp_hashnode *node)
{
  /* HT_NODE () expands to node->ident that HT_IDENT_TO_GCC_IDENT ()
     then subtracts a nonzero constant, deriving a pointer to
     a different member than ident.  That's strictly undefined
     and detected by -Warray-bounds.  Suppress it.  See PR 101372.  */
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Warray-bounds"
  return HT_IDENT_TO_GCC_IDENT (HT_NODE (const_cast<cpp_hashnode *> (node)));
#pragma GCC diagnostic pop
}

/* Id for dumping module information.  */
int module_dump_id;

/* We have a special module owner.  */
#define MODULE_UNKNOWN (~0U)	/* Not yet known.  */

/* Prefix for section names.  */
#define MOD_SNAME_PFX ".gnu.c++"

/* Format a version for user consumption.  */

typedef char verstr_t[32];
static void
version2string (unsigned version, verstr_t &out)
{
  unsigned major = MODULE_MAJOR (version);
  unsigned minor = MODULE_MINOR (version);

  if (IS_EXPERIMENTAL (version))
    sprintf (out, "%04u/%02u/%02u-%02u:%02u%s",
	     2000 + major / 10000, (major / 100) % 100, (major % 100),
	     minor / 100, minor % 100,
	     EXPERIMENT ("", " (experimental)"));
  else
    sprintf (out, "%u.%u", major, minor);
}

/* Include files to note translation for.  */
static vec<const char *, va_heap, vl_embed> *note_includes;

/* Modules to note CMI pathnames.  */
static vec<const char *, va_heap, vl_embed> *note_cmis;

/* Traits to hash an arbitrary pointer.  Entries are not deletable,
   and removal is a noop (removal needed upon destruction).  */
template <typename T>
struct nodel_ptr_hash : pointer_hash<T>, typed_noop_remove <T *> {
  /* Nothing is deletable.  Everything is insertable.  */
  static bool is_deleted (T *) { return false; }
  static void mark_deleted (T *) { gcc_unreachable (); }
};

/* Map from pointer to signed integer.  */
typedef simple_hashmap_traits<nodel_ptr_hash<void>, int> ptr_int_traits;
typedef hash_map<void *, signed, ptr_int_traits> ptr_int_hash_map;

/********************************************************************/
/* Basic streaming & ELF.  Serialization is usually via mmap.  For
   writing we slide a buffer over the output file, syncing it
   appropriately.  For reading we simply map the whole file (as a
   file-backed read-only map -- it's just address space, leaving the
   OS pager to deal with getting the data to us).  Some buffers need
   to be more conventional malloc'd contents.  */

/* Variable length buffer.  */

namespace {
class data {
public:
  class allocator {
  public:
    /* Tools tend to moan if the dtor's not virtual.  */
    virtual ~allocator () {}

  public:
    void grow (data &obj, unsigned needed, bool exact);
    void shrink (data &obj);

  public:
    virtual char *grow (char *ptr, unsigned needed);
    virtual void shrink (char *ptr);
  };

public:
  char *buffer;		/* Buffer being transferred.  */
  /* Although size_t would be the usual size, we know we never get
     more than 4GB of buffer -- because that's the limit of the
     encapsulation format.  And if you need bigger imports, you're
     doing it wrong.  */
  unsigned size;	/* Allocated size of buffer.  */
  unsigned pos;		/* Position in buffer.  */

public:
  data ()
    : buffer (NULL), size (0), pos (0)
  {
  }
  ~data ()
  {
    /* Make sure the derived and/or using class know what they're
       doing.  */
    gcc_checking_assert (!buffer);
  }

protected:
  char *use (unsigned count)
  {
    if (size < pos + count)
      return NULL;
    char *res = &buffer[pos];
    pos += count;
    return res;
  }

  unsigned calc_crc (unsigned) const;

public:
  void unuse (unsigned count)
  {
    pos -= count;
  }

public:
  static allocator simple_memory;
};
} // anon namespace

/* The simple data allocator.  */
data::allocator data::simple_memory;

/* Grow buffer to at least size NEEDED.  */

void
data::allocator::grow (data &obj, unsigned needed, bool exact)
{
  gcc_checking_assert (needed ? needed > obj.size : !obj.size);
  if (!needed)
    /* Pick a default size.  */
    needed = EXPERIMENT (100, 1000);

  if (!exact)
    needed *= 2;
  obj.buffer = grow (obj.buffer, needed);
  if (obj.buffer)
    obj.size = needed;
  else
    obj.pos = obj.size = 0;
}

/* Free a buffer.  */

void
data::allocator::shrink (data &obj)
{
  shrink (obj.buffer);
  obj.buffer = NULL;
  obj.size = 0;
}

char *
data::allocator::grow (char *ptr, unsigned needed)
{
  return XRESIZEVAR (char, ptr, needed);
}

void
data::allocator::shrink (char *ptr)
{
  XDELETEVEC (ptr);
}

/* Calculate the crc32 of the buffer.  Note the CRC is stored in the
   first 4 bytes, so don't include them.  */

unsigned
data::calc_crc (unsigned l) const
{
  return crc32 (0, (unsigned char *)buffer + 4, l - 4);
}

class elf_in;

/* Byte stream reader.  */

namespace {
class bytes_in : public data {
  typedef data parent;

protected:
  bool overrun;		/* Sticky read-too-much flag.  */

public:
  bytes_in ()
    : parent (), overrun (false)
  {
  }
  ~bytes_in ()
  {
  }

public:
  /* Begin reading a named section.  */
  bool begin (location_t loc, elf_in *src, const char *name);
  /* Begin reading a numbered section with optional name.  */
  bool begin (location_t loc, elf_in *src, unsigned, const char * = NULL);
  /* Complete reading a buffer.  Propagate errors and return true on
     success.  */
  bool end (elf_in *src);
  /* Return true if there is unread data.  */
  bool more_p () const
  {
    return pos != size;
  }

public:
  /* Start reading at OFFSET.  */
  void random_access (unsigned offset)
  {
    if (offset > size)
      set_overrun ();
    pos = offset;
  }

public:
  void align (unsigned boundary)
  {
    if (unsigned pad = pos & (boundary - 1))
      read (boundary - pad);
  }

public:
  const char *read (unsigned count)
  {
    char *ptr = use (count);
    if (!ptr)
      set_overrun ();
    return ptr;
  }

public:
  bool check_crc () const;
  /* We store the CRC in the first 4 bytes, using host endianness.  */
  unsigned get_crc () const
  {
    return *(const unsigned *)&buffer[0];
  }

public:
  /* Manipulate the overrun flag.  */
  bool get_overrun () const
  {
    return overrun;
  }
  void set_overrun ()
  {
    overrun = true;
  }

public:
  unsigned u32 ();	/* Read uncompressed integer.  */

public:
  int c () ATTRIBUTE_UNUSED;	/* Read a char.  */
  int i ();		/* Read a signed int.  */
  unsigned u ();	/* Read an unsigned int.  */
  size_t z ();		/* Read a size_t.  */
  HOST_WIDE_INT wi ();	/* Read a HOST_WIDE_INT.  */
  unsigned HOST_WIDE_INT wu ();	/* Read an unsigned HOST_WIDE_INT.  */
  const char *str (size_t * = NULL);	/* Read a string.  */
  const void *buf (size_t);	/* Read a fixed-length buffer.  */
  cpp_hashnode *cpp_node ();	/* Read a cpp node.  */

  struct bits_in;
  bits_in stream_bits ();
};
} // anon namespace

/* Verify the buffer's CRC is correct.  */

bool
bytes_in::check_crc () const
{
  if (size < 4)
    return false;

  unsigned c_crc = calc_crc (size);
  if (c_crc != get_crc ())
    return false;

  return true;
}

class elf_out;

/* Byte stream writer.  */

namespace {
class bytes_out : public data {
  typedef data parent;

public:
  allocator *memory;	/* Obtainer of memory.  */

public:
  bytes_out (allocator *memory)
    : parent (), memory (memory)
  {
  }
  ~bytes_out ()
  {
  }

public:
  bool streaming_p () const
  {
    return memory != NULL;
  }

public:
  void set_crc (unsigned *crc_ptr);

public:
  /* Begin writing, maybe reserve space for CRC.  */
  void begin (bool need_crc = true);
  /* Finish writing.  Spill to section by number.  */
  unsigned end (elf_out *, unsigned, unsigned *crc_ptr = NULL);

public:
  void align (unsigned boundary)
  {
    if (unsigned pad = pos & (boundary - 1))
      write (boundary - pad);
  }

public:
  char *write (unsigned count, bool exact = false)
  {
    if (size < pos + count)
      memory->grow (*this, pos + count, exact);
    return use (count);
  }

public:
  void u32 (unsigned);	/* Write uncompressed integer.  */

public:
  void c (unsigned char) ATTRIBUTE_UNUSED;  /* Write unsigned char.  */
  void i (int);		/* Write signed int.  */
  void u (unsigned);	/* Write unsigned int.  */
  void z (size_t s);	/* Write size_t.  */
  void wi (HOST_WIDE_INT);	/* Write HOST_WIDE_INT.  */
  void wu (unsigned HOST_WIDE_INT);  /* Write unsigned HOST_WIDE_INT.  */
  void str (const char *ptr)
  {
    str (ptr, strlen (ptr));
  }
  void cpp_node (const cpp_hashnode *node)
  {
    str ((const char *)NODE_NAME (node), NODE_LEN (node));
  }
  void str (const char *, size_t);  /* Write string of known length.  */
  void buf (const void *, size_t);  /* Write fixed length buffer.  */
  void *buf (size_t);	/* Create a writable buffer.  */

  struct bits_out;
  bits_out stream_bits ();

public:
  /* Format a NUL-terminated raw string.  */
  void printf (const char *, ...) ATTRIBUTE_PRINTF_2;
  void print_time (const char *, const tm *, const char *);

public:
  /* Dump instrumentation.  */
  static void instrument ();

protected:
  /* Instrumentation.  */
  static unsigned spans[4];
  static unsigned lengths[4];
};
} // anon namespace

/* Finish bit packet.  Rewind the bytes not used.  */

static unsigned
bit_flush (data& bits, uint32_t& bit_val, unsigned& bit_pos)
{
  gcc_assert (bit_pos);
  unsigned bytes = (bit_pos + 7) / 8;
  bits.unuse (4 - bytes);
  bit_pos = 0;
  bit_val = 0;
  return bytes;
}

/* Bit stream reader (RAII-enabled).  Bools are packed into bytes.  You
   cannot mix bools and non-bools.  Use bflush to flush the current stream
   of bools on demand.  Upon destruction bflush is called.

   When reading, we don't know how many bools we'll read in.  So read
   4 bytes-worth, and then rewind when flushing if we didn't need them
   all.  You can't have a block of bools closer than 4 bytes to the
   end of the buffer.

   Both bits_in and bits_out maintain the necessary state for bit packing,
   and since these objects are locally constructed the compiler can more
   easily track their state across consecutive reads/writes and optimize
   away redundant buffering checks.  */

struct bytes_in::bits_in {
  bytes_in& in;
  uint32_t bit_val = 0;
  unsigned bit_pos = 0;

  bits_in (bytes_in& in)
    : in (in)
  { }

  ~bits_in ()
  {
    bflush ();
  }

  bits_in (bits_in&&) = default;
  bits_in (const bits_in&) = delete;
  bits_in& operator= (const bits_in&) = delete;

  /* Completed a block of bools.  */
  void bflush ()
  {
    if (bit_pos)
      bit_flush (in, bit_val, bit_pos);
  }

  /* Read one bit.  */
  bool b ()
  {
    if (!bit_pos)
      bit_val = in.u32 ();
    bool x = (bit_val >> bit_pos) & 1;
    bit_pos = (bit_pos + 1) % 32;
    return x;
  }
};

/* Factory function for bits_in.  */

bytes_in::bits_in
bytes_in::stream_bits ()
{
  return bits_in (*this);
}

/* Bit stream writer (RAII-enabled), counterpart to bits_in.  */

struct bytes_out::bits_out {
  bytes_out& out;
  uint32_t bit_val = 0;
  unsigned bit_pos = 0;
  char is_set = -1;

  bits_out (bytes_out& out)
    : out (out)
  { }

  ~bits_out ()
  {
    bflush ();
  }

  bits_out (bits_out&&) = default;
  bits_out (const bits_out&) = delete;
  bits_out& operator= (const bits_out&) = delete;

  /* Completed a block of bools.  */
  void bflush ()
  {
    if (bit_pos)
      {
	out.u32 (bit_val);
	out.lengths[2] += bit_flush (out, bit_val, bit_pos);
      }
    out.spans[2]++;
    is_set = -1;
  }

  /* Write one bit.

     It may be worth optimizing for most bools being zero.  Some kind of
     run-length encoding?  */
  void b (bool x)
  {
    if (is_set != x)
      {
	is_set = x;
	out.spans[x]++;
      }
    out.lengths[x]++;
    bit_val |= unsigned (x) << bit_pos++;
    if (bit_pos == 32)
      {
	out.u32 (bit_val);
	out.lengths[2] += bit_flush (out, bit_val, bit_pos);
      }
  }
};

/* Factory function for bits_out.  */

bytes_out::bits_out
bytes_out::stream_bits ()
{
  return bits_out (*this);
}

/* Instrumentation.  */
unsigned bytes_out::spans[4];
unsigned bytes_out::lengths[4];

/* If CRC_PTR is non-null, set the CRC of the buffer.  Mix the CRC into
   that pointed to by CRC_PTR.  */

void
bytes_out::set_crc (unsigned *crc_ptr)
{
  if (crc_ptr)
    {
      gcc_checking_assert (pos >= 4);

      unsigned crc = calc_crc (pos);
      unsigned accum = *crc_ptr;
      /* Only mix the existing *CRC_PTR if it is non-zero.  */
      accum = accum ? crc32_combine (accum, crc, pos - 4) : crc;
      *crc_ptr = accum;

      /* Buffer will be sufficiently aligned.  */
      *(unsigned *)buffer = crc;
    }
}

/* Exactly 4 bytes.  Used internally for bool packing and a few other
   places.  We can't simply use uint32_t because (a) alignment and
   (b) we need little-endian for the bool streaming rewinding to make
   sense.  */

void
bytes_out::u32 (unsigned val)
{
  if (char *ptr = write (4))
    {
      ptr[0] = val;
      ptr[1] = val >> 8;
      ptr[2] = val >> 16;
      ptr[3] = val >> 24;
    }
}

unsigned
bytes_in::u32 ()
{
  unsigned val = 0;
  if (const char *ptr = read (4))
    {
      val |= (unsigned char)ptr[0];
      val |= (unsigned char)ptr[1] << 8;
      val |= (unsigned char)ptr[2] << 16;
      val |= (unsigned char)ptr[3] << 24;
    }

  return val;
}

/* Chars are unsigned and written as single bytes.  */

void
bytes_out::c (unsigned char v)
{
  if (char *ptr = write (1))
    *ptr = v;
}

int
bytes_in::c ()
{
  int v = 0;
  if (const char *ptr = read (1))
    v = (unsigned char)ptr[0];
  return v;
}

/* Ints that fit in 7 bits are written as a single byte.  Otherwise
   the first byte holds a 3-bit count of the following bytes, which
   are in big-endian form, plus the top 4 bits of the value.  */

void
bytes_out::i (int v)
{
  if (char *ptr = write (1))
    {
      if (v <= 0x3f && v >= -0x40)
	*ptr = v & 0x7f;
      else
	{
	  unsigned bytes = 0;
	  int probe;
	  if (v >= 0)
	    for (probe = v >> 8; probe > 0x7; probe >>= 8)
	      bytes++;
	  else
	    for (probe = v >> 8; probe < -0x8; probe >>= 8)
	      bytes++;
	  *ptr = 0x80 | bytes << 4 | (probe & 0xf);
	  if ((ptr = write (++bytes)))
	    for (; bytes--; v >>= 8)
	      ptr[bytes] = v & 0xff;
	}
    }
}

int
bytes_in::i ()
{
  int v = 0;
  if (const char *ptr = read (1))
    {
      v = *ptr & 0xff;
      if (v & 0x80)
	{
	  unsigned bytes = (v >> 4) & 0x7;
	  v &= 0xf;
	  if (v & 0x8)
	    v |= -1 ^ 0x7;
	  /* unsigned necessary due to left shifts of -ve values.  */
	  unsigned uv = unsigned (v);
	  if ((ptr = read (++bytes)))
	    while (bytes--)
	      uv = (uv << 8) | (*ptr++ & 0xff);
	  v = int (uv);
	}
      else if (v & 0x40)
	v |= -1 ^ 0x3f;
    }

  return v;
}

void
bytes_out::u (unsigned v)
{
  if (char *ptr = write (1))
    {
      if (v <= 0x7f)
	*ptr = v;
      else
	{
	  unsigned bytes = 0;
	  unsigned probe;
	  for (probe = v >> 8; probe > 0xf; probe >>= 8)
	    bytes++;
	  *ptr = 0x80 | bytes << 4 | probe;
	  if ((ptr = write (++bytes)))
	    for (; bytes--; v >>= 8)
	      ptr[bytes] = v & 0xff;
	}
    }
}

unsigned
bytes_in::u ()
{
  unsigned v = 0;

  if (const char *ptr = read (1))
    {
      v = *ptr & 0xff;
      if (v & 0x80)
	{
	  unsigned bytes = (v >> 4) & 0x7;
	  v &= 0xf;
	  if ((ptr = read (++bytes)))
	    while (bytes--)
	      v = (v << 8) | (*ptr++ & 0xff);
	}
    }

  return v;
}

void
bytes_out::wi (HOST_WIDE_INT v)
{
  if (char *ptr = write (1))
    {
      if (v <= 0x3f && v >= -0x40)
	*ptr = v & 0x7f;
      else
	{
	  unsigned bytes = 0;
	  HOST_WIDE_INT probe;
	  if (v >= 0)
	    for (probe = v >> 8; probe > 0x7; probe >>= 8)
	      bytes++;
	  else
	    for (probe = v >> 8; probe < -0x8; probe >>= 8)
	      bytes++;
	  *ptr = 0x80 | bytes << 4 | (probe & 0xf);
	  if ((ptr = write (++bytes)))
	    for (; bytes--; v >>= 8)
	      ptr[bytes] = v & 0xff;
	}
    }
}

HOST_WIDE_INT
bytes_in::wi ()
{
  HOST_WIDE_INT v = 0;
  if (const char *ptr = read (1))
    {
      v = *ptr & 0xff;
      if (v & 0x80)
	{
	  unsigned bytes = (v >> 4) & 0x7;
	  v &= 0xf;
	  if (v & 0x8)
	    v |= -1 ^ 0x7;
	  /* unsigned necessary due to left shifts of -ve values.  */
	  unsigned HOST_WIDE_INT uv = (unsigned HOST_WIDE_INT) v;
	  if ((ptr = read (++bytes)))
	    while (bytes--)
	      uv = (uv << 8) | (*ptr++ & 0xff);
	  v = (HOST_WIDE_INT) uv;
	}
      else if (v & 0x40)
	v |= -1 ^ 0x3f;
    }

  return v;
}

/* unsigned wide ints are just written as signed wide ints.  */

inline void
bytes_out::wu (unsigned HOST_WIDE_INT v)
{
  wi ((HOST_WIDE_INT) v);
}

inline unsigned HOST_WIDE_INT
bytes_in::wu ()
{
  return (unsigned HOST_WIDE_INT) wi ();
}

/* size_t written as unsigned or unsigned wide int.  */

inline void
bytes_out::z (size_t s)
{
  if (sizeof (s) == sizeof (unsigned))
    u (s);
  else
    wu (s);
}

inline size_t
bytes_in::z ()
{
  if (sizeof (size_t) == sizeof (unsigned))
    return u ();
  else
    return wu ();
}

/* Buffers are simply memcpied.  */

void *
bytes_out::buf (size_t len)
{
  align (sizeof (void *) * 2);
  return write (len);
}

void
bytes_out::buf (const void *src, size_t len)
{
  if (void *ptr = buf (len))
    memcpy (ptr, src, len);
}

const void *
bytes_in::buf (size_t len)
{
  align (sizeof (void *) * 2);
  const char *ptr = read (len);

  return ptr;
}

/* Strings are written as a size_t length followed by the buffer.
   Make sure there's a NUL terminator on read.  */

void
bytes_out::str (const char *string, size_t len)
{
  z (len);
  if (len)
    {
      gcc_checking_assert (!string[len]);
      buf (string, len + 1);
    }
}

const char *
bytes_in::str (size_t *len_p)
{
  size_t len = z ();

  /* We're about to trust some user data.  */
  if (overrun)
    len = 0;
  if (len_p)
    *len_p = len;
  const char *str = NULL;
  if (len)
    {
      str = reinterpret_cast<const char *> (buf (len + 1));
      if (!str || str[len])
	{
	  set_overrun ();
	  str = NULL;
	}
    }
  return str ? str : "";
}

cpp_hashnode *
bytes_in::cpp_node ()
{
  size_t len;
  const char *s = str (&len);
  if (!len)
    return NULL;
  return ::cpp_node (get_identifier_with_length (s, len));
}

/* Format a string directly to the buffer, including a terminating
   NUL. Intended for human consumption. */

void
bytes_out::printf (const char *format, ...)
{
  va_list args;
  /* Exercise buffer expansion. */
  size_t len = EXPERIMENT (10, 500);

  while (char *ptr = write (len))
    {
      va_start (args, format);
      size_t actual = vsnprintf (ptr, len, format, args) + 1;
      va_end (args);
      if (actual <= len)
	{
	  unuse (len - actual);
	  break;
	}
      unuse (len);
      len = actual;
    }
}
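The loop above is the standard vsnprintf probe-and-retry idiom: format into whatever space is available, and if the result was truncated, vsnprintf's return value says exactly how much is needed, so give the space back and go around once more. A self-contained sketch of the same idiom against a std::string (the name `append_printf` is illustrative, not part of GCC):

```cpp
#include <cassert>
#include <cstdarg>
#include <cstdio>
#include <string>

// Append formatted text to OUT, growing exactly as vsnprintf dictates.
static void append_printf (std::string &out, const char *format, ...)
{
  size_t len = 16;		// deliberately small, to exercise the retry
  for (;;)
    {
      size_t base = out.size ();
      out.resize (base + len);
      va_list args;
      va_start (args, format);	// must restart the va_list each attempt
      // Returns the full length that *would* be written, excluding NUL.
      size_t actual = vsnprintf (&out[base], len, format, args) + 1;
      va_end (args);
      if (actual <= len)
	{
	  out.resize (base + actual - 1); // drop unused space and the NUL
	  break;
	}
      out.resize (base);	// give it all back, retry with exact size
      len = actual;
    }
}
```

As in `bytes_out::printf`, the second pass is guaranteed to fit, so the loop runs at most twice per call.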

void
bytes_out::print_time (const char *kind, const tm *time, const char *tz)
{
  printf ("%stime: %4u/%02u/%02u %02u:%02u:%02u %s",
	  kind, time->tm_year + 1900, time->tm_mon + 1, time->tm_mday,
	  time->tm_hour, time->tm_min, time->tm_sec, tz);
}

/* Encapsulated Lazy Records Of Named Declarations.
   Header: Stunningly Elf32_Ehdr-like
   Sections: Sectional data
     [1-N) : User data sections
     N .strtab  : strings, stunningly ELF STRTAB-like
   Index: Section table, stunningly ELF32_Shdr-like. */

class elf {
protected:
  /* Constants used within the format. */
  enum private_constants {
    /* File kind. */
    ET_NONE = 0,
    EM_NONE = 0,
    OSABI_NONE = 0,

    /* File format. */
    EV_CURRENT = 1,
    CLASS32 = 1,
    DATA2LSB = 1,
    DATA2MSB = 2,

    /* Section numbering. */
    SHN_UNDEF = 0,
    SHN_LORESERVE = 0xff00,
    SHN_XINDEX = 0xffff,

    /* Section types. */
    SHT_NONE = 0,	/* No contents. */
    SHT_PROGBITS = 1,	/* Random bytes. */
    SHT_STRTAB = 3,	/* A string table. */

    /* Section flags. */
    SHF_NONE = 0x00,	/* Nothing. */
    SHF_STRINGS = 0x20,	/* NUL-Terminated strings. */

    /* I really hope we do not get CMI files larger than 4GB. */
    MY_CLASS = CLASS32,
    /* It is host endianness that is relevant. */
    MY_ENDIAN = DATA2LSB
#ifdef WORDS_BIGENDIAN
    ^ DATA2LSB ^ DATA2MSB
#endif
  };

public:
  /* Constants visible to users. */
  enum public_constants {
    /* Special error codes. Breaking layering a bit. */
    E_BAD_DATA = -1,	/* Random unexpected data errors. */
    E_BAD_LAZY = -2,	/* Badly ordered laziness. */
    E_BAD_IMPORT = -3	/* A nested import failed. */
  };

protected:
  /* File identification. On-disk representation. */
  struct ident {
    uint8_t magic[4];	/* 0x7f, 'E', 'L', 'F' */
    uint8_t klass;	/* 4:CLASS32 */
    uint8_t data;	/* 5:DATA2[LM]SB */
    uint8_t version;	/* 6:EV_CURRENT */
    uint8_t osabi;	/* 7:OSABI_NONE */
    uint8_t abiver;	/* 8: 0 */
    uint8_t pad[7];	/* 9-15 */
  };
  /* File header. On-disk representation. */
  struct header {
    struct ident ident;
    uint16_t type;	/* ET_NONE */
    uint16_t machine;	/* EM_NONE */
    uint32_t version;	/* EV_CURRENT */
    uint32_t entry;	/* 0 */
    uint32_t phoff;	/* 0 */
    uint32_t shoff;	/* Section Header Offset in file */
    uint32_t flags;
    uint16_t ehsize;	/* ELROND Header SIZE -- sizeof (header) */
    uint16_t phentsize; /* 0 */
    uint16_t phnum;	/* 0 */
    uint16_t shentsize; /* Section Header SIZE -- sizeof (section) */
    uint16_t shnum;	/* Section Header NUM */
    uint16_t shstrndx;	/* Section Header STRing iNDeX */
  };
  /* File section. On-disk representation. */
  struct section {
    uint32_t name;	/* String table offset. */
    uint32_t type;	/* SHT_* */
    uint32_t flags;	/* SHF_* */
    uint32_t addr;	/* 0 */
    uint32_t offset;	/* OFFSET in file */
    uint32_t size;	/* SIZE of section */
    uint32_t link;	/* 0 */
    uint32_t info;	/* 0 */
    uint32_t addralign; /* 0 */
    uint32_t entsize;	/* ENTry SIZE, usually 0 */
  };

protected:
  data hdr;	/* The header. */
  data sectab;	/* The section table. */
  data strtab;	/* String table. */
  int fd;	/* File descriptor we're reading or writing. */
  int err;	/* Sticky error code. */

public:
  /* Construct from file descriptor FD. E is errno if FD is invalid. */
  elf (int fd, int e)
    :hdr (), sectab (), strtab (), fd (fd), err (fd >= 0 ? 0 : e)
  {}
  ~elf ()
  {
    gcc_checking_assert (fd < 0 && !hdr.buffer
			 && !sectab.buffer && !strtab.buffer);
  }

public:
  /* Return the error, if we have an error. */
  int get_error () const
  {
    return err;
  }
  /* Set the error, unless it's already been set. */
  void set_error (int e = E_BAD_DATA)
  {
    if (!err)
      err = e;
  }
  /* Get an error string. */
  const char *get_error (const char *) const;

public:
  /* Begin reading/writing file. Return false on error. */
  bool begin () const
  {
    return !get_error ();
  }
  /* Finish reading/writing file. Return false on error. */
  bool end ();
};
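Note that `elf::set_error` only records the *first* failure: once `err` is set, later calls are no-ops, so a cascade of knock-on I/O errors reports its root cause rather than whichever failure happened last. A minimal sketch of that sticky-error pattern in isolation (the class name is illustrative):

```cpp
#include <cassert>
#include <cerrno>

// Sticky error holder: remembers only the first error reported, the way
// elf::set_error does, so secondary failures cannot mask the root cause.
class sticky_error
{
  int err = 0;
public:
  void set_error (int e)
  {
    if (!err)
      err = e;
  }
  int get_error () const { return err; }
};
```

A failed `read` followed by a failed `close` thus still reports the read's errno.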

/* Return error string. */

const char *
elf::get_error (const char *name) const
{
  if (!name)
    return "Unknown CMI mapping";

  switch (err)
    {
    case 0:
      gcc_unreachable ();
    case E_BAD_DATA:
      return "Bad file data";
    case E_BAD_IMPORT:
      return "Bad import dependency";
    case E_BAD_LAZY:
      return "Bad lazy ordering";
    default:
      return xstrerror (err);
    }
}

/* Finish the file. Return false on error. */

bool
elf::end ()
{
  /* Close the stream and free the section table. */
  if (fd >= 0 && close (fd))
    set_error (errno);
  fd = -1;

  return !get_error ();
}

/* ELROND reader. */

class elf_in : public elf {
  typedef elf parent;

private:
  /* For freezing & defrosting. */
#if !defined (HOST_LACKS_INODE_NUMBERS)
  dev_t device;
  ino_t inode;
#endif

public:
  elf_in (int fd, int e)
    :parent (fd, e)
  {
  }
  ~elf_in ()
  {
  }

public:
  bool is_frozen () const
  {
    return fd < 0 && hdr.pos;
  }
  bool is_freezable () const
  {
    return fd >= 0 && hdr.pos;
  }
  void freeze ();
  bool defrost (const char *);

  /* If BYTES is in the mmapped area, allocate a new buffer for it. */
  void preserve (bytes_in &bytes ATTRIBUTE_UNUSED)
  {
#if MAPPED_READING
    if (hdr.buffer && bytes.buffer >= hdr.buffer
	&& bytes.buffer < hdr.buffer + hdr.pos)
      {
	char *buf = bytes.buffer;
	bytes.buffer = data::simple_memory.grow (NULL, bytes.size);
	memcpy (bytes.buffer, buf, bytes.size);
      }
#endif
  }
  /* If BYTES is not in SELF's mmapped area, free it. SELF might be
     NULL. */
  static void release (elf_in *self ATTRIBUTE_UNUSED, bytes_in &bytes)
  {
#if MAPPED_READING
    if (!(self && self->hdr.buffer && bytes.buffer >= self->hdr.buffer
	  && bytes.buffer < self->hdr.buffer + self->hdr.pos))
#endif
      data::simple_memory.shrink (bytes.buffer);
    bytes.buffer = NULL;
    bytes.size = 0;
  }

public:
  static void grow (data &data, unsigned needed)
  {
    gcc_checking_assert (!data.buffer);
#if !MAPPED_READING
    data.buffer = XNEWVEC (char, needed);
#endif
    data.size = needed;
  }
  static void shrink (data &data)
  {
#if !MAPPED_READING
    XDELETEVEC (data.buffer);
#endif
    data.buffer = NULL;
    data.size = 0;
  }

public:
  const section *get_section (unsigned s) const
  {
    if (s * sizeof (section) < sectab.size)
      return reinterpret_cast<const section *>
	(&sectab.buffer[s * sizeof (section)]);
    else
      return NULL;
  }
  unsigned get_section_limit () const
  {
    return sectab.size / sizeof (section);
  }

protected:
  const char *read (data *, unsigned, unsigned);

public:
  /* Read section by number. */
  bool read (data *d, const section *s)
  {
    return s && read (d, s->offset, s->size);
  }

  /* Find section by name. */
  unsigned find (const char *name);
  /* Find section by index. */
  const section *find (unsigned snum, unsigned type = SHT_PROGBITS);

public:
  /* Release the string table, when we're done with it. */
  void release ()
  {
    shrink (strtab);
  }

public:
  bool begin (location_t);
  bool end ()
  {
    release ();
#if MAPPED_READING
    if (hdr.buffer)
      munmap (hdr.buffer, hdr.pos);
    hdr.buffer = NULL;
#endif
    shrink (sectab);

    return parent::end ();
  }

public:
  /* Return string name at OFFSET. Checks OFFSET range. Always
     returns non-NULL. We know offset 0 is an empty string. */
  const char *name (unsigned offset)
  {
    return &strtab.buffer[offset < strtab.size ? offset : 0];
  }
};

/* ELROND writer. */

class elf_out : public elf, public data::allocator {
  typedef elf parent;
  /* Desired section alignment on disk. */
  static const int SECTION_ALIGN = 16;

private:
  ptr_int_hash_map identtab;	/* Map of IDENTIFIERS to strtab offsets. */
  unsigned pos;			/* Write position in file. */
#if MAPPED_WRITING
  unsigned offset;		/* Offset of the mapping. */
  unsigned extent;		/* Length of mapping. */
  unsigned page_size;		/* System page size. */
#endif

public:
  elf_out (int fd, int e)
    :parent (fd, e), identtab (500), pos (0)
  {
#if MAPPED_WRITING
    offset = extent = 0;
    page_size = sysconf (_SC_PAGE_SIZE);
    if (page_size < SECTION_ALIGN)
      /* Something really strange. */
      set_error (EINVAL);
#endif
  }
  ~elf_out ()
  {
    data::simple_memory.shrink (hdr);
    data::simple_memory.shrink (sectab);
    data::simple_memory.shrink (strtab);
  }

#if MAPPED_WRITING
private:
  void create_mapping (unsigned ext, bool extending = true);
  void remove_mapping ();
#endif

protected:
  using allocator::grow;
  char *grow (char *, unsigned needed) final override;
#if MAPPED_WRITING
  using allocator::shrink;
  void shrink (char *) final override;
#endif

public:
  unsigned get_section_limit () const
  {
    return sectab.pos / sizeof (section);
  }

protected:
  unsigned add (unsigned type, unsigned name = 0,
		unsigned off = 0, unsigned size = 0, unsigned flags = SHF_NONE);
  unsigned write (const data &);
#if MAPPED_WRITING
  unsigned write (const bytes_out &);
#endif

public:
  /* IDENTIFIER to strtab offset. */
  unsigned name (tree ident);
  /* String literal to strtab offset. */
  unsigned name (const char *n);
  /* Qualified name of DECL to strtab offset. */
  unsigned qualified_name (tree decl, bool is_defn);

private:
  unsigned strtab_write (const char *s, unsigned l);
  void strtab_write (tree decl, int);

public:
  /* Add a section with contents or strings. */
  unsigned add (const bytes_out &, bool string_p, unsigned name);

public:
  /* Begin and end writing. */
  bool begin ();
  bool end ();
};

/* Begin reading section NAME (of type PROGBITS) from SOURCE.
   Data always checked for CRC. */

bool
bytes_in::begin (location_t loc, elf_in *source, const char *name)
{
  unsigned snum = source->find (name);

  return begin (loc, source, snum, name);
}

/* Begin reading section numbered SNUM with NAME (may be NULL). */

bool
bytes_in::begin (location_t loc, elf_in *source, unsigned snum, const char *name)
{
  if (!source->read (this, source->find (snum))
      || !size || !check_crc ())
    {
      source->set_error (elf::E_BAD_DATA);
      source->shrink (*this);
      if (name)
	error_at (loc, "section %qs is missing or corrupted", name);
      else
	error_at (loc, "section #%u is missing or corrupted", snum);
      return false;
    }
  pos = 4;
  return true;
}

/* Finish reading a section. */

bool
bytes_in::end (elf_in *src)
{
  if (more_p ())
    set_overrun ();
  if (overrun)
    src->set_error ();

  src->shrink (*this);

  return !overrun;
}

/* Begin writing buffer. */

void
bytes_out::begin (bool need_crc)
{
  if (need_crc)
    pos = 4;
  memory->grow (*this, 0, false);
}

/* Finish writing buffer. Stream out to SINK as named section NAME.
   Return section number or 0 on failure. If CRC_PTR is non-null,
   CRC the data. Otherwise it is a string section. */

unsigned
bytes_out::end (elf_out *sink, unsigned name, unsigned *crc_ptr)
{
  lengths[3] += pos;
  spans[3]++;

  set_crc (crc_ptr);
  unsigned sec_num = sink->add (*this, !crc_ptr, name);
  memory->shrink (*this);

  return sec_num;
}

/* Close and open the file, without destroying it. */

void
elf_in::freeze ()
{
  gcc_checking_assert (!is_frozen ());
#if MAPPED_READING
  if (munmap (hdr.buffer, hdr.pos) < 0)
    set_error (errno);
#endif
  if (close (fd) < 0)
    set_error (errno);
  fd = -1;
}

bool
elf_in::defrost (const char *name)
{
  gcc_checking_assert (is_frozen ());
  struct stat stat;

  fd = open (name, O_RDONLY | O_CLOEXEC | O_BINARY);
  if (fd < 0 || fstat (fd, &stat) < 0)
    set_error (errno);
  else
    {
      bool ok = hdr.pos == unsigned (stat.st_size);
#ifndef HOST_LACKS_INODE_NUMBERS
      if (device != stat.st_dev
	  || inode != stat.st_ino)
	ok = false;
#endif
      if (!ok)
	set_error (EMFILE);
#if MAPPED_READING
      if (ok)
	{
	  char *mapping = reinterpret_cast<char *>
	    (mmap (NULL, hdr.pos, PROT_READ, MAP_SHARED, fd, 0));
	  if (mapping == MAP_FAILED)
	  fail:
	    set_error (errno);
	  else
	    {
	      if (madvise (mapping, hdr.pos, MADV_RANDOM))
		goto fail;

	      /* These buffers are never NULL in this case. */
	      strtab.buffer = mapping + strtab.pos;
	      sectab.buffer = mapping + sectab.pos;
	      hdr.buffer = mapping;
	    }
	}
#endif
    }

  return !get_error ();
}

/* Read LENGTH bytes at file position POS into DATA. Return a pointer
   to the data, or NULL on error. */

const char *
elf_in::read (data *data, unsigned pos, unsigned length)
{
#if MAPPED_READING
  if (pos + length > hdr.pos)
    {
      set_error (EINVAL);
      return NULL;
    }
#else
  if (pos != ~0u && lseek (fd, pos, SEEK_SET) < 0)
    {
      set_error (errno);
      return NULL;
    }
#endif
  grow (*data, length);
#if MAPPED_READING
  data->buffer = hdr.buffer + pos;
#else
  if (::read (fd, data->buffer, data->size) != ssize_t (length))
    {
      set_error (errno);
      shrink (*data);
      return NULL;
    }
#endif

  return data->buffer;
}

/* Find section SNUM of TYPE. Return section pointer or NULL on error. */

const elf::section *
elf_in::find (unsigned snum, unsigned type)
{
  const section *sec = get_section (snum);
  if (!snum || !sec || sec->type != type)
    return NULL;
  return sec;
}

/* Find section named SNAME. Return section number, or zero on
   failure. */

unsigned
elf_in::find (const char *sname)
{
  for (unsigned pos = sectab.size; pos -= sizeof (section); )
    {
      const section *sec
	= reinterpret_cast<const section *> (&sectab.buffer[pos]);

      if (0 == strcmp (sname, name (sec->name)))
	return pos / sizeof (section);
    }

  return 0;
}

/* Begin reading file. Verify header. Pull in section and string
   tables. Return true on success. */

bool
elf_in::begin (location_t loc)
{
  if (!parent::begin ())
    return false;

  struct stat stat;
  unsigned size = 0;
  if (!fstat (fd, &stat))
    {
#if !defined (HOST_LACKS_INODE_NUMBERS)
      device = stat.st_dev;
      inode = stat.st_ino;
#endif
      /* Never generate files > 4GB, check we've not been given one. */
      if (stat.st_size == unsigned (stat.st_size))
	size = unsigned (stat.st_size);
    }

#if MAPPED_READING
  /* MAP_SHARED so that the file is backing store. If someone else
     concurrently writes it, they're wrong. */
  void *mapping = mmap (NULL, size, PROT_READ, MAP_SHARED, fd, 0);
  if (mapping == MAP_FAILED)
    {
    fail:
      set_error (errno);
      return false;
    }
  /* We'll be hopping over this randomly. Some systems declare the
     first parm as char *, and others declare it as void *. */
  if (madvise (reinterpret_cast <char *> (mapping), size, MADV_RANDOM))
    goto fail;

  hdr.buffer = (char *)mapping;
#else
  read (&hdr, 0, sizeof (header));
#endif
  hdr.pos = size; /* Record size of the file. */

  const header *h = reinterpret_cast<const header *> (hdr.buffer);
  if (!h)
    return false;

  if (h->ident.magic[0] != 0x7f
      || h->ident.magic[1] != 'E'
      || h->ident.magic[2] != 'L'
      || h->ident.magic[3] != 'F')
    {
      error_at (loc, "not Encapsulated Lazy Records of Named Declarations");
    failed:
      shrink (hdr);
      return false;
    }

  /* We expect a particular format -- the ELF is not intended to be
     distributable. */
  if (h->ident.klass != MY_CLASS
      || h->ident.data != MY_ENDIAN
      || h->ident.version != EV_CURRENT
      || h->type != ET_NONE
      || h->machine != EM_NONE
      || h->ident.osabi != OSABI_NONE)
    {
      error_at (loc, "unexpected encapsulation format or type");
      goto failed;
    }

  int e = -1;
  if (!h->shoff || h->shentsize != sizeof (section))
    {
    malformed:
      set_error (e);
      error_at (loc, "encapsulation is malformed");
      goto failed;
    }

  unsigned strndx = h->shstrndx;
  unsigned shnum = h->shnum;
  if (shnum == SHN_XINDEX)
    {
      if (!read (&sectab, h->shoff, sizeof (section)))
	{
	section_table_fail:
	  e = errno;
	  goto malformed;
	}
      shnum = get_section (0)->size;
      /* Freeing does mean we'll re-read it in the case we're not
	 mapping, but this is going to be rare. */
      shrink (sectab);
    }

  if (!shnum)
    goto malformed;

  if (!read (&sectab, h->shoff, shnum * sizeof (section)))
    goto section_table_fail;

  if (strndx == SHN_XINDEX)
    strndx = get_section (0)->link;

  if (!read (&strtab, find (strndx, SHT_STRTAB)))
    goto malformed;

  /* The string table should be at least one byte, with NUL chars
     at either end. */
  if (!(strtab.size && !strtab.buffer[0]
	&& !strtab.buffer[strtab.size - 1]))
    goto malformed;

#if MAPPED_READING
  /* Record the offsets of the section and string tables. */
  sectab.pos = h->shoff;
  strtab.pos = shnum * sizeof (section);
#else
  shrink (hdr);
#endif

  return true;
}

/* Create a new mapping. */

#if MAPPED_WRITING
void
elf_out::create_mapping (unsigned ext, bool extending)
{
#ifndef HAVE_POSIX_FALLOCATE
#define posix_fallocate(fd,off,len) ftruncate (fd, off + len)
#endif
  void *mapping = MAP_FAILED;
  if (extending && ext < 1024 * 1024)
    {
      if (!posix_fallocate (fd, offset, ext * 2))
	mapping = mmap (NULL, ext * 2, PROT_READ | PROT_WRITE,
			MAP_SHARED, fd, offset);
      if (mapping != MAP_FAILED)
	ext *= 2;
    }
  if (mapping == MAP_FAILED)
    {
      if (!extending || !posix_fallocate (fd, offset, ext))
	mapping = mmap (NULL, ext, PROT_READ | PROT_WRITE,
			MAP_SHARED, fd, offset);
      if (mapping == MAP_FAILED)
	{
	  set_error (errno);
	  mapping = NULL;
	  ext = 0;
	}
    }
#undef posix_fallocate
  hdr.buffer = (char *)mapping;
  extent = ext;
}
#endif

/* Flush out the current mapping. */

#if MAPPED_WRITING
void
elf_out::remove_mapping ()
{
  if (hdr.buffer)
    {
      /* MS_ASYNC dtrt with the removed mapping, including a
	 subsequent overlapping remap. */
      if (msync (hdr.buffer, extent, MS_ASYNC)
	  || munmap (hdr.buffer, extent))
	/* We're somewhat screwed at this point. */
	set_error (errno);
    }

  hdr.buffer = NULL;
}
#endif

/* Grow a mapping of PTR to be NEEDED bytes long. This gets
   interesting if the new size grows the EXTENT. */

char *
elf_out::grow (char *data, unsigned needed)
{
  if (!data)
    {
      /* First allocation, check we're aligned. */
      gcc_checking_assert (!(pos & (SECTION_ALIGN - 1)));
#if MAPPED_WRITING
      data = hdr.buffer + (pos - offset);
#endif
    }

#if MAPPED_WRITING
  unsigned off = data - hdr.buffer;
  if (off + needed > extent)
    {
      /* We need to grow the mapping. */
      unsigned lwm = off & ~(page_size - 1);
      unsigned hwm = (off + needed + page_size - 1) & ~(page_size - 1);

      gcc_checking_assert (hwm > extent);

      remove_mapping ();

      offset += lwm;
      create_mapping (extent < hwm - lwm ? hwm - lwm : extent);

      data = hdr.buffer + (off - lwm);
    }
#else
  data = allocator::grow (data, needed);
#endif

  return data;
}

#if MAPPED_WRITING
/* Shrinking is a NOP. */
void
elf_out::shrink (char *)
{
}
#endif

/* Write S of length L to the strtab buffer. L must include the ending
   NUL, if that's what you want. */

unsigned
elf_out::strtab_write (const char *s, unsigned l)
{
  if (strtab.pos + l > strtab.size)
    data::simple_memory.grow (strtab, strtab.pos + l, false);
  memcpy (strtab.buffer + strtab.pos, s, l);
  unsigned res = strtab.pos;
  strtab.pos += l;
  return res;
}

/* Write qualified name of decl. INNER >0 if this is a definition, <0
   if this is a qualifier of an outer name. */

void
elf_out::strtab_write (tree decl, int inner)
{
  tree ctx = CP_DECL_CONTEXT (decl);
  if (TYPE_P (ctx))
    ctx = TYPE_NAME (ctx);
  if (ctx != global_namespace)
    strtab_write (ctx, -1);

  tree name = DECL_NAME (decl);
  if (!name)
    name = DECL_ASSEMBLER_NAME_RAW (decl);
  strtab_write (IDENTIFIER_POINTER (name), IDENTIFIER_LENGTH (name));

  if (inner)
    strtab_write (&"::{}"[inner+1], 2);
}
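The final line of `strtab_write` packs both two-character suffixes into a single literal: `inner == -1` indexes `"::{}"` at 0, yielding `"::"` for a qualifier, while `inner == +1` indexes at 2, yielding `"{}"` for a definition. A standalone sketch of the trick, recursing over a toy scope chain the way `strtab_write` recurses over `DECL_CONTEXT` (`qualify` and its parameters are illustrative names, not GCC API):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Build a qualified name the way elf_out::strtab_write does: each
// enclosing scope gets "::" appended, and a definition gets a trailing
// "{}". Both suffixes live in one literal; inner + 1 selects between
// "::{}" + 0 ("::") and "::{}" + 2 ("{}").
static std::string qualify (const std::vector<std::string> &scopes,
			    const std::string &entity, bool is_defn)
{
  std::string out;
  for (const std::string &s : scopes)
    {
      int inner = -1;			// a qualifier of an outer name
      out += s;
      out.append (&"::{}"[inner + 1], 2);
    }
  out += entity;
  if (is_defn)
    {
      int inner = +1;			// a definition
      out.append (&"::{}"[inner + 1], 2);
    }
  return out;
}
```

So a definition of `seconds` inside `std::chrono` would be recorded as `std::chrono::seconds{}`, which is how the `{}` marker distinguishes a definition's strtab entry from a mere declaration's.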

/* Map IDENTIFIER IDENT to strtab offset. Inserts into strtab if not
   already there. */

unsigned
elf_out::name (tree ident)
{
  unsigned res = 0;
  if (ident)
    {
      bool existed;
      int *slot = &identtab.get_or_insert (ident, &existed);
      if (!existed)
	*slot = strtab_write (IDENTIFIER_POINTER (ident),
			      IDENTIFIER_LENGTH (ident) + 1);
      res = *slot;
    }
  return res;
}

/* Map LITERAL to strtab offset. Does not detect duplicates and
   expects LITERAL to remain live until strtab is written out. */

unsigned
elf_out::name (const char *literal)
{
  return strtab_write (literal, strlen (literal) + 1);
}

/* Map a DECL's qualified name to strtab offset. Does not detect
   duplicates. */

unsigned
elf_out::qualified_name (tree decl, bool is_defn)
{
  gcc_checking_assert (DECL_P (decl) && decl != global_namespace);
  unsigned result = strtab.pos;

  strtab_write (decl, is_defn);
  strtab_write ("", 1);

  return result;
}

/* Add section to file. Return section number. TYPE & NAME identify
   the section. OFF and SIZE identify the file location of its
   data. FLAGS contains additional info. */

unsigned
elf_out::add (unsigned type, unsigned name, unsigned off, unsigned size,
	      unsigned flags)
{
  gcc_checking_assert (!(off & (SECTION_ALIGN - 1)));
  if (sectab.pos + sizeof (section) > sectab.size)
    data::simple_memory.grow (sectab, sectab.pos + sizeof (section), false);
  section *sec = reinterpret_cast<section *> (sectab.buffer + sectab.pos);
  memset (sec, 0, sizeof (section));
  sec->type = type;
  sec->flags = flags;
  sec->name = name;
  sec->offset = off;
  sec->size = size;
  if (flags & SHF_STRINGS)
    sec->entsize = 1;

  unsigned res = sectab.pos;
  sectab.pos += sizeof (section);
  return res / sizeof (section);
}

/* Pad to the next alignment boundary, then write BUFFER to disk.
   Return the position of the start of the write, or zero on failure. */

unsigned
elf_out::write (const data &buffer)
{
#if MAPPED_WRITING
  /* HDR is always mapped. */
  if (&buffer != &hdr)
    {
      bytes_out out (this);
      grow (out, buffer.pos, true);
      if (out.buffer)
	memcpy (out.buffer, buffer.buffer, buffer.pos);
      shrink (out);
    }
  else
    /* We should have been aligned during the first allocation. */
    gcc_checking_assert (!(pos & (SECTION_ALIGN - 1)));
#else
  if (::write (fd, buffer.buffer, buffer.pos) != ssize_t (buffer.pos))
    {
      set_error (errno);
      return 0;
    }
#endif
  unsigned res = pos;
  pos += buffer.pos;

  if (unsigned padding = -pos & (SECTION_ALIGN - 1))
    {
#if !MAPPED_WRITING
      /* Align the section on disk, should help the necessary copies.
	 fseeking to extend is non-portable. */
      static char zero[SECTION_ALIGN];
      if (::write (fd, &zero, padding) != ssize_t (padding))
	set_error (errno);
#endif
      pos += padding;
    }
  return res;
}

/* Write a streaming buffer. It must be using us as an allocator. */

#if MAPPED_WRITING
unsigned
elf_out::write (const bytes_out &buf)
{
  gcc_checking_assert (buf.memory == this);
  /* A directly mapped buffer. */
  gcc_checking_assert (buf.buffer - hdr.buffer >= 0
		       && buf.buffer - hdr.buffer + buf.size <= extent);
  unsigned res = pos;
  pos += buf.pos;

  /* Align up. We're not going to advance into the next page. */
  pos += -pos & (SECTION_ALIGN - 1);

  return res;
}
#endif

/* Write data and add section. STRING_P is true for a string
   section, false for PROGBITS. NAME identifies the section (0 is the
   empty name). DATA is the contents. Return section number or 0 on
   failure (0 is the undef section). */

unsigned
elf_out::add (const bytes_out &data, bool string_p, unsigned name)
{
  unsigned off = write (data);

  return add (string_p ? SHT_STRTAB : SHT_PROGBITS, name,
	      off, data.pos, string_p ? SHF_STRINGS : SHF_NONE);
}

/* Begin writing the file. Initialize the section table and write an
   empty header. Return false on failure. */

bool
elf_out::begin ()
{
  if (!parent::begin ())
    return false;

  /* Let the allocators pick a default. */
  data::simple_memory.grow (strtab, 0, false);
  data::simple_memory.grow (sectab, 0, false);

  /* The string table starts with an empty string. */
  name ("");

  /* Create the UNDEF section. */
  add (SHT_NONE);

#if MAPPED_WRITING
  /* Start a mapping. */
  create_mapping (EXPERIMENT (page_size,
			      (32767 + page_size) & ~(page_size - 1)));
  if (!hdr.buffer)
    return false;
#endif

  /* Write an empty header. */
  grow (hdr, sizeof (header), true);
  header *h = reinterpret_cast<header *> (hdr.buffer);
  memset (h, 0, sizeof (header));
  hdr.pos = hdr.size;
  write (hdr);
  return !get_error ();
}

/* Finish writing the file. Write out the string & section tables.
   Fill in the header. Return false on error. */

bool
elf_out::end ()
{
  if (fd >= 0)
    {
      /* Write the string table. */
      unsigned strnam = name (".strtab");
      unsigned stroff = write (strtab);
      unsigned strndx = add (SHT_STRTAB, strnam, stroff, strtab.pos,
			     SHF_STRINGS);

      /* Store escape values in section[0]. */
      if (strndx >= SHN_LORESERVE)
	{
	  reinterpret_cast<section *> (sectab.buffer)->link = strndx;
	  strndx = SHN_XINDEX;
	}
      unsigned shnum = sectab.pos / sizeof (section);
      if (shnum >= SHN_LORESERVE)
	{
	  reinterpret_cast<section *> (sectab.buffer)->size = shnum;
	  shnum = SHN_XINDEX;
	}

      unsigned shoff = write (sectab);

#if MAPPED_WRITING
      if (offset)
	{
	  remove_mapping ();
	  offset = 0;
	  create_mapping ((sizeof (header) + page_size - 1) & ~(page_size - 1),
			  false);
	}
      unsigned length = pos;
#else
      if (lseek (fd, 0, SEEK_SET) < 0)
	set_error (errno);
#endif
      /* Write header. */
      if (!get_error ())
	{
	  /* Write the correct header now. */
	  header *h = reinterpret_cast<header *> (hdr.buffer);
	  h->ident.magic[0] = 0x7f;
	  h->ident.magic[1] = 'E';	/* Elrond */
	  h->ident.magic[2] = 'L';	/* is an */
	  h->ident.magic[3] = 'F';	/* elf. */
	  h->ident.klass = MY_CLASS;
	  h->ident.data = MY_ENDIAN;
	  h->ident.version = EV_CURRENT;
	  h->ident.osabi = OSABI_NONE;
	  h->type = ET_NONE;
	  h->machine = EM_NONE;
	  h->version = EV_CURRENT;
	  h->shoff = shoff;
	  h->ehsize = sizeof (header);
	  h->shentsize = sizeof (section);
	  h->shnum = shnum;
	  h->shstrndx = strndx;

	  pos = 0;
	  write (hdr);
	}

#if MAPPED_WRITING
      remove_mapping ();
      if (ftruncate (fd, length))
	set_error (errno);
#endif
    }

  data::simple_memory.shrink (sectab);
  data::simple_memory.shrink (strtab);

  return parent::end ();
}

/********************************************************************/

/* A dependency set. This is used during stream out to determine the
   connectivity of the graph. Every namespace-scope declaration that
   needs writing has a depset. The depset is filled with the (depsets
   of) declarations within this module that it references. For a
   declaration that'll generally be named types. For definitions
   it'll also be declarations in the body.

   From that we can convert the graph to a DAG, via determining the
   Strongly Connected Clusters. Each cluster is streamed
   independently, and thus we achieve lazy loading.

   Other decls that get a depset are namespaces themselves and
   unnameable declarations. */
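The SCC determination mentioned above is Tarjan's algorithm: each depset reuses its `section` member as Tarjan's lowlink while it sits on the stack. As a refresher on the lowlink/stack mechanics, here is a compact, self-contained sketch over a toy adjacency list — the names (`tarjan`, `connect`, `comp`) are illustrative, not the GCC implementation:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Tarjan's strongly-connected-components algorithm. comp[v] gives each
// node's component id; components are numbered in reverse topological
// order of the condensed DAG.
struct tarjan
{
  const std::vector<std::vector<int>> &adj;
  std::vector<int> index, lowlink, comp, stack;
  std::vector<bool> on_stack;
  int next_index = 0, next_comp = 0;

  tarjan (const std::vector<std::vector<int>> &g)
    : adj (g), index (g.size (), -1), lowlink (g.size ()),
      comp (g.size (), -1), on_stack (g.size (), false)
  {
    for (unsigned v = 0; v < g.size (); v++)
      if (index[v] < 0)
	connect (v);
  }

  void connect (int v)
  {
    index[v] = lowlink[v] = next_index++;
    stack.push_back (v);
    on_stack[v] = true;

    for (int w : adj[v])
      if (index[w] < 0)
	{
	  connect (w);		// tree edge: propagate child's lowlink
	  lowlink[v] = std::min (lowlink[v], lowlink[w]);
	}
      else if (on_stack[w])	// back/cross edge into the current stack
	lowlink[v] = std::min (lowlink[v], index[w]);

    if (lowlink[v] == index[v])	// v roots an SCC: pop it off the stack
      {
	int w;
	do
	  {
	    w = stack.back ();
	    stack.pop_back ();
	    on_stack[w] = false;
	    comp[w] = next_comp;
	  }
	while (w != v);
	next_comp++;
      }
  }
};
```

The "removed from the stack" moment in the depset comment below corresponds to the pop loop: once a node leaves the stack its lowlink has served its purpose and the slot can be repurposed, just as depset::section becomes the real section number afterwards.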

class depset {
private:
  tree entity;  /* Entity, or containing namespace. */
  uintptr_t discriminator;  /* Flags or identifier. */

public:
  /* The kinds of entity the depset could describe. The ordering is
     significant, see entity_kind_name. */
  enum entity_kind
  {
    EK_DECL,		/* A decl. */
    EK_SPECIALIZATION,	/* A specialization. */
    EK_PARTIAL,		/* A partial specialization. */
    EK_USING,		/* A using declaration (at namespace scope). */
    EK_NAMESPACE,	/* A namespace. */
    EK_REDIRECT,	/* Redirect to a template_decl. */
    EK_EXPLICIT_HWM,
    EK_BINDING = EK_EXPLICIT_HWM, /* Implicitly encoded. */
    EK_FOR_BINDING,	/* A decl being inserted for a binding. */
    EK_INNER_DECL,	/* A decl defined outside of its imported
			   context. */
    EK_DIRECT_HWM = EK_PARTIAL + 1,

    EK_BITS = 3		/* Only need to encode below EK_EXPLICIT_HWM. */
  };

private:
  /* Placement of bit fields in discriminator. */
  enum disc_bits
  {
    DB_ZERO_BIT,		/* Set to disambiguate identifier from
				   flags. */
    DB_SPECIAL_BIT,		/* First dep slot is special. */
    DB_KIND_BIT,		/* Kind of the entity. */
    DB_KIND_BITS = EK_BITS,
    DB_DEFN_BIT = DB_KIND_BIT + DB_KIND_BITS,
    DB_IS_MEMBER_BIT,		/* Is an out-of-class member. */
    DB_IS_INTERNAL_BIT,		/* It is an (erroneous)
				   internal-linkage entity. */
    DB_REFS_INTERNAL_BIT,	/* Refers to an internal-linkage
				   entity. */
    DB_IMPORTED_BIT,		/* An imported entity. */
    DB_UNREACHED_BIT,		/* A yet-to-be reached entity. */
    DB_HIDDEN_BIT,		/* A hidden binding. */
    /* The following bits are not independent, but enumerating them is
       awkward. */
    DB_TYPE_SPEC_BIT,		/* Specialization in the type table. */
    DB_FRIEND_SPEC_BIT,		/* An instantiated template friend. */
  };

public:
2347 /* The first slot is special for EK_SPECIALIZATIONS it is a
2348 spec_entry pointer. It is not relevant for the SCC
2349 determination. */
2350 vec<depset *> deps; /* Depsets we reference. */
2351
2352public:
2353 unsigned cluster; /* Strongly connected cluster, later entity number */
2354 unsigned section; /* Section written to. */
2355 /* During SCC construction, section is lowlink, until the depset is
2356 removed from the stack. See Tarjan algorithm for details. */
2357
private:
  /* Construction via factories.  Destruction via hash traits.  */
  depset (tree entity);
  ~depset ();

public:
  static depset *make_binding (tree, tree);
  static depset *make_entity (tree, entity_kind, bool = false);
  /* Late setting a binding name -- /then/ insert into hash!  */
  inline void set_binding_name (tree name)
  {
    gcc_checking_assert (!get_name ());
    discriminator = reinterpret_cast<uintptr_t> (name);
  }

private:
  template<unsigned I> void set_flag_bit ()
  {
    gcc_checking_assert (I < 2 || !is_binding ());
    discriminator |= 1u << I;
  }
  template<unsigned I> void clear_flag_bit ()
  {
    gcc_checking_assert (I < 2 || !is_binding ());
    discriminator &= ~(1u << I);
  }
  template<unsigned I> bool get_flag_bit () const
  {
    gcc_checking_assert (I < 2 || !is_binding ());
    return bool ((discriminator >> I) & 1);
  }

public:
  bool is_binding () const
  {
    return !get_flag_bit<DB_ZERO_BIT> ();
  }
  entity_kind get_entity_kind () const
  {
    if (is_binding ())
      return EK_BINDING;
    return entity_kind ((discriminator >> DB_KIND_BIT) & ((1u << EK_BITS) - 1));
  }
  const char *entity_kind_name () const;

public:
  bool has_defn () const
  {
    return get_flag_bit<DB_DEFN_BIT> ();
  }

public:
  /* This class-member is defined here, but the class was imported.  */
  bool is_member () const
  {
    gcc_checking_assert (get_entity_kind () == EK_DECL);
    return get_flag_bit<DB_IS_MEMBER_BIT> ();
  }
public:
  bool is_internal () const
  {
    return get_flag_bit<DB_IS_INTERNAL_BIT> ();
  }
  bool refs_internal () const
  {
    return get_flag_bit<DB_REFS_INTERNAL_BIT> ();
  }
  bool is_import () const
  {
    return get_flag_bit<DB_IMPORTED_BIT> ();
  }
  bool is_unreached () const
  {
    return get_flag_bit<DB_UNREACHED_BIT> ();
  }
  bool is_hidden () const
  {
    return get_flag_bit<DB_HIDDEN_BIT> ();
  }
  bool is_type_spec () const
  {
    return get_flag_bit<DB_TYPE_SPEC_BIT> ();
  }
  bool is_friend_spec () const
  {
    return get_flag_bit<DB_FRIEND_SPEC_BIT> ();
  }

public:
  /* We set these bits outside of depset.  */
  void set_hidden_binding ()
  {
    set_flag_bit<DB_HIDDEN_BIT> ();
  }
  void clear_hidden_binding ()
  {
    clear_flag_bit<DB_HIDDEN_BIT> ();
  }

public:
  bool is_special () const
  {
    return get_flag_bit<DB_SPECIAL_BIT> ();
  }
  void set_special ()
  {
    set_flag_bit<DB_SPECIAL_BIT> ();
  }

public:
  tree get_entity () const
  {
    return entity;
  }
  tree get_name () const
  {
    gcc_checking_assert (is_binding ());
    return reinterpret_cast<tree> (discriminator);
  }

public:
  /* Traits for a hash table of pointers to bindings.  */
  struct traits {
    /* Each entry is a pointer to a depset.  */
    typedef depset *value_type;
    /* We lookup by container:maybe-identifier pair.  */
    typedef std::pair<tree,tree> compare_type;

    static const bool empty_zero_p = true;

    /* Hash and equality for compare_type.  */
    inline static hashval_t hash (const compare_type &p)
    {
      hashval_t h = pointer_hash<tree_node>::hash (p.first);
      if (p.second)
	{
	  hashval_t nh = IDENTIFIER_HASH_VALUE (p.second);
	  h = iterative_hash_hashval_t (h, nh);
	}
      return h;
    }
    inline static bool equal (const value_type b, const compare_type &p)
    {
      if (b->entity != p.first)
	return false;

      if (p.second)
	return b->discriminator == reinterpret_cast<uintptr_t> (p.second);
      else
	return !b->is_binding ();
    }

    /* (Re)hasher for a binding itself.  */
    inline static hashval_t hash (const value_type b)
    {
      hashval_t h = pointer_hash<tree_node>::hash (b->entity);
      if (b->is_binding ())
	{
	  hashval_t nh = IDENTIFIER_HASH_VALUE (b->get_name ());
	  h = iterative_hash_hashval_t (h, nh);
	}
      return h;
    }

    /* Empty via NULL.  */
    static inline void mark_empty (value_type &p) { p = NULL; }
    static inline bool is_empty (value_type p) { return !p; }

    /* Nothing is deletable.  Everything is insertable.  */
    static bool is_deleted (value_type) { return false; }
    static void mark_deleted (value_type) { gcc_unreachable (); }

    /* We own the entities in the hash table.  */
    static void remove (value_type p)
    {
      delete (p);
    }
  };

public:
  class hash : public hash_table<traits> {
    typedef traits::compare_type key_t;
    typedef hash_table<traits> parent;

  public:
    vec<depset *> worklist;	/* Worklist of decls to walk.  */
    hash *chain;		/* Original table.  */
    depset *current;		/* Current depset being depended.  */
    unsigned section;		/* When writing out, the section.  */
    bool reached_unreached;	/* We reached an unreached entity.  */

  public:
    hash (size_t size, hash *c = NULL)
      : parent (size), chain (c), current (NULL), section (0),
	reached_unreached (false)
    {
      worklist.create (size);
    }
    ~hash ()
    {
      worklist.release ();
    }

  public:
    bool is_key_order () const
    {
      return chain != NULL;
    }

  private:
    depset **entity_slot (tree entity, bool = true);
    depset **binding_slot (tree ctx, tree name, bool = true);
    depset *maybe_add_declaration (tree decl);

  public:
    depset *find_dependency (tree entity);
    depset *find_binding (tree ctx, tree name);
    depset *make_dependency (tree decl, entity_kind);
    void add_dependency (depset *);

  public:
    void add_mergeable (depset *);
    depset *add_dependency (tree decl, entity_kind);
    void add_namespace_context (depset *, tree ns);

  private:
    static bool add_binding_entity (tree, WMB_Flags, void *);

  public:
    bool add_namespace_entities (tree ns, bitmap partitions);
    void add_specializations (bool decl_p);
    void add_partial_entities (vec<tree, va_gc> *);
    void add_class_entities (vec<tree, va_gc> *);

  public:
    void find_dependencies (module_state *);
    bool finalize_dependencies ();
    vec<depset *> connect ();
  };

public:
  struct tarjan {
    vec<depset *> result;
    vec<depset *> stack;
    unsigned index;

    tarjan (unsigned size)
      : index (0)
    {
      result.create (size);
      stack.create (50);
    }
    ~tarjan ()
    {
      gcc_assert (!stack.length ());
      stack.release ();
    }

  public:
    void connect (depset *);
  };
};
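The discriminator trick above — a single `uintptr_t` holding either an identifier pointer or a word of flags, distinguished by the low "zero" bit — can be sketched in isolation.  The `discriminated` type and its members are illustrative, not GCC's; the sketch assumes the pointed-to objects are at least 2-byte aligned (true for GCC tree nodes, and for `int` on common platforms).

```cpp
#include <cstdint>

// A word that is either a pointer (low bit clear, relying on object
// alignment) or a small flag set (low bit deliberately set).
struct discriminated {
  uintptr_t word = 0;

  enum { ZERO_BIT = 1u << 0, DEFN_BIT = 1u << 1 };

  void set_pointer (const int *p)
  { word = reinterpret_cast<uintptr_t> (p); }
  void set_flags (bool defn)
  { word = ZERO_BIT | (defn ? DEFN_BIT : 0); }

  // The low bit disambiguates the two interpretations.
  bool is_pointer () const { return !(word & ZERO_BIT); }
  const int *pointer () const
  { return reinterpret_cast<const int *> (word); }
  bool has_defn () const { return word & DEFN_BIT; }
};
```

This mirrors why `depset::is_binding` tests `!get_flag_bit<DB_ZERO_BIT>`: a stored identifier pointer never has that bit set, so a clear bit means "this word is a name".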

inline
depset::depset (tree entity)
  :entity (entity), discriminator (0), cluster (0), section (0)
{
  deps.create (0);
}

inline
depset::~depset ()
{
  deps.release ();
}

const char *
depset::entity_kind_name () const
{
  /* Same order as entity_kind.  */
  static const char *const names[] =
    {"decl", "specialization", "partial", "using",
     "namespace", "redirect", "binding"};
  entity_kind kind = get_entity_kind ();
  gcc_checking_assert (kind < ARRAY_SIZE (names));
  return names[kind];
}

/* Create a depset for a namespace binding NS::NAME.  */

depset *depset::make_binding (tree ns, tree name)
{
  depset *binding = new depset (ns);

  binding->discriminator = reinterpret_cast<uintptr_t> (name);

  return binding;
}

depset *depset::make_entity (tree entity, entity_kind ek, bool is_defn)
{
  depset *r = new depset (entity);

  r->discriminator = ((1 << DB_ZERO_BIT)
		      | (ek << DB_KIND_BIT)
		      | is_defn << DB_DEFN_BIT);

  return r;
}

class pending_key
{
public:
  tree ns;
  tree id;
};

template<>
struct default_hash_traits<pending_key>
{
  using value_type = pending_key;

  static const bool empty_zero_p = false;
  static hashval_t hash (const value_type &k)
  {
    hashval_t h = IDENTIFIER_HASH_VALUE (k.id);
    h = iterative_hash_hashval_t (DECL_UID (k.ns), h);

    return h;
  }
  static bool equal (const value_type &k, const value_type &l)
  {
    return k.ns == l.ns && k.id == l.id;
  }
  static void mark_empty (value_type &k)
  {
    k.ns = k.id = NULL_TREE;
  }
  static void mark_deleted (value_type &k)
  {
    k.ns = NULL_TREE;
    gcc_checking_assert (k.id);
  }
  static bool is_empty (const value_type &k)
  {
    return k.ns == NULL_TREE && k.id == NULL_TREE;
  }
  static bool is_deleted (const value_type &k)
  {
    return k.ns == NULL_TREE && k.id != NULL_TREE;
  }
  static void remove (value_type &)
  {
  }
};
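Outside of GCC's `hash_table`, the same namespace:identifier keying could be expressed with the standard library.  A sketch, using strings in place of tree nodes (all names here are illustrative); the hash combiner plays the role of `iterative_hash_hashval_t`:

```cpp
#include <functional>
#include <string>
#include <unordered_map>
#include <vector>

// A (namespace, identifier) pair, standing in for the tree-based key.
struct pk {
  std::string ns, id;
  bool operator== (const pk &o) const
  { return ns == o.ns && id == o.id; }
};

struct pk_hash {
  size_t operator() (const pk &k) const {
    // Combine the two sub-hashes, analogous to hashing the
    // identifier then mixing in the namespace's DECL_UID.
    size_t h = std::hash<std::string>{} (k.ns);
    size_t h2 = std::hash<std::string>{} (k.id);
    return h ^ (h2 + 0x9e3779b9 + (h << 6) + (h >> 2));
  }
};

// Per-key list of pending entity indices, like pending_map_t below.
using pending_map = std::unordered_map<pk, std::vector<unsigned>, pk_hash>;
```

GCC's traits additionally encode empty (`ns` and `id` both null) and deleted (`ns` null, `id` non-null) sentinel states in the key itself, which `std::unordered_map` does not need.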

typedef hash_map<pending_key, auto_vec<unsigned>> pending_map_t;

/* Not-loaded entities that are keyed to a namespace-scope
   identifier.  See module_state::write_pendings for details.  */
pending_map_t *pending_table;

/* Decls that need some post processing once a batch of lazy loads has
   completed.  */
vec<tree, va_heap, vl_embed> *post_load_decls;

/* Some entities are keyed to another entity for ODR purposes.
   For example, at namespace scope, 'inline auto var = []{};', that
   lambda is keyed to 'var', and follows its ODRness.  */
typedef hash_map<tree, auto_vec<tree>> keyed_map_t;
static keyed_map_t *keyed_table;

/********************************************************************/
/* Tree streaming.  The tree streaming is very specific to the tree
   structures themselves.  A tag indicates the kind of tree being
   streamed.  -ve tags indicate backreferences to already-streamed
   trees.  Backreferences are auto-numbered.  */
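The tag scheme — positive tags for nodes streamed by value, negative auto-numbered tags referring back to nodes already streamed — can be sketched in isolation.  The `writer`/`reader` types here are illustrative stand-ins, not the real `trees_out`/`trees_in` streamers:

```cpp
#include <map>
#include <vector>

// Writer side: the first visit of a node streams it by value (positive
// tag) and assigns it the next back-reference number; later visits
// stream only the negative back-reference.
struct writer {
  std::map<const void *, int> refs;  // node -> backref number
  int next_ref = 0;

  // Returns the tag to stream: > 0 means "by value", < 0 "backref".
  int tag_for (const void *node) {
    auto it = refs.find (node);
    if (it != refs.end ())
      return it->second;             // already streamed: negative tag
    refs[node] = --next_ref;         // auto-number: -1, -2, ...
    return +1;                       // stream by value this time
  }
};

// Reader side mirrors the numbering: each by-value read appends the
// new node to a vector, and tag -N indexes slot N-1 of that vector.
struct reader {
  std::vector<const void *> back_refs;
  void insert (const void *node) { back_refs.push_back (node); }
  const void *back_ref (int tag) const
  { return back_refs[-tag - 1]; }
};
```

Because both sides count by-value reads in the same order, no explicit numbering needs to be streamed for back-references.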

/* Tree tags.  */
enum tree_tag {
  tt_null,		/* NULL_TREE.  */
  tt_fixed,		/* Fixed vector index.  */

  tt_node,		/* By-value node.  */
  tt_decl,		/* By-value mergeable decl.  */
  tt_tpl_parm,		/* Template parm.  */

  /* The ordering of the following 4 is relied upon in
     trees_out::tree_node.  */
  tt_id,		/* Identifier node.  */
  tt_conv_id,		/* Conversion operator name.  */
  tt_anon_id,		/* Anonymous name.  */
  tt_lambda_id,		/* Lambda name.  */

  tt_typedef_type,	/* A (possibly implicit) typedefed type.  */
  tt_derived_type,	/* A type derived from another type.  */
  tt_variant_type,	/* A variant of another type.  */

  tt_tinfo_var,		/* Typeinfo object.  */
  tt_tinfo_typedef,	/* Typeinfo typedef.  */
  tt_ptrmem_type,	/* Pointer to member type.  */
  tt_nttp_var,		/* NTTP_OBJECT VAR_DECL.  */

  tt_parm,		/* Function parameter or result.  */
  tt_enum_value,	/* An enum value.  */
  tt_enum_decl,		/* An enum decl.  */
  tt_data_member,	/* Data member/using-decl.  */

  tt_binfo,		/* A BINFO.  */
  tt_vtable,		/* A vtable.  */
  tt_thunk,		/* A thunk.  */
  tt_clone_ref,

  tt_entity,		/* An extra-cluster entity.  */

  tt_template,		/* The TEMPLATE_RESULT of a template.  */
};

enum walk_kind {
  WK_none,	/* No walk to do (a back- or fixed-ref happened).  */
  WK_normal,	/* Normal walk (by-name if possible).  */

  WK_value,	/* By-value walk.  */
};

enum merge_kind
{
  MK_unique,		/* Known unique.  */
  MK_named,		/* Found by CTX, NAME + maybe_arg types etc.  */
  MK_field,		/* Found by CTX and index on TYPE_FIELDS.  */
  MK_vtable,		/* Found by CTX and index on TYPE_VTABLES.  */
  MK_as_base,		/* Found by CTX.  */

  MK_partial,

  MK_enum,		/* Found by CTX & 1stMemberNAME.  */
  MK_keyed,		/* Found by key & index.  */
  MK_local_type,	/* Found by CTX, index.  */

  MK_friend_spec,	/* Like named, but has a tmpl & args too.  */
  MK_local_friend,	/* Found by CTX, index.  */

  MK_indirect_lwm = MK_enum,

  /* Template specialization kinds below.  These are all found via
     primary template and specialization args.  */
  MK_template_mask = 0x10,  /* A template specialization.  */

  MK_tmpl_decl_mask = 0x4,  /* In decl table.  */

  MK_tmpl_tmpl_mask = 0x1,  /* We want TEMPLATE_DECL.  */

  MK_type_spec = MK_template_mask,
  MK_decl_spec = MK_template_mask | MK_tmpl_decl_mask,

  MK_hwm = 0x20
};
/* This is more than a debugging array.  NULLs are used to determine
   an invalid merge_kind number.  */
static char const *const merge_kind_name[MK_hwm] =
  {
    "unique", "named", "field", "vtable",	/* 0...3  */
    "asbase", "partial", "enum", "attached",	/* 4...7  */

    "local type", "friend spec", "local friend", NULL,	/* 8...11  */
    NULL, NULL, NULL, NULL,

    "type spec", "type tmpl spec",	/* 16,17 type (template).  */
    NULL, NULL,

    "decl spec", "decl tmpl spec",	/* 20,21 decl (template).  */
    NULL, NULL,
    NULL, NULL, NULL, NULL,
    NULL, NULL, NULL, NULL,
  };
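The mask arithmetic can be checked in isolation; a minimal sketch, assuming only the mask values given above (`spec_kind` itself is a hypothetical helper, not part of GCC):

```cpp
// Mirror of the specialization-kind masks from the merge_kind enum.
enum merge_kind_bits {
  MK_template_mask = 0x10,  // a template specialization
  MK_tmpl_decl_mask = 0x4,  // lives in the decl table
  MK_tmpl_tmpl_mask = 0x1,  // we want the TEMPLATE_DECL itself
};

// Compose a specialization kind, as MK_type_spec / MK_decl_spec do.
constexpr unsigned spec_kind (bool decl_table, bool want_tmpl)
{
  return MK_template_mask
	 | (decl_table ? MK_tmpl_decl_mask : 0)
	 | (want_tmpl ? MK_tmpl_tmpl_mask : 0);
}
```

The resulting values index merge_kind_name directly: 16 is "type spec", 17 "type tmpl spec", 20 "decl spec", 21 "decl tmpl spec".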

/* Mergeable entity location data.  */
struct merge_key {
  cp_ref_qualifier ref_q : 2;
  unsigned index;

  tree ret;	/* Return type, if appropriate.  */
  tree args;	/* Arg types, if appropriate.  */

  tree constraints;  /* Constraints.  */

  merge_key ()
    :ref_q (REF_QUAL_NONE), index (0),
     ret (NULL_TREE), args (NULL_TREE),
     constraints (NULL_TREE)
  {
  }
};

/* Hashmap of merged duplicates.  Usually decls, but can contain
   BINFOs.  */
typedef hash_map<tree,uintptr_t,
		 simple_hashmap_traits<nodel_ptr_hash<tree_node>,uintptr_t> >
duplicate_hash_map;

/* Data needed for post-processing.  */
struct post_process_data {
  tree decl;
  location_t start_locus;
  location_t end_locus;
};

/* Tree stream reader.  Note that reading a stream doesn't mark the
   read trees with TREE_VISITED.  Thus it's quite safe to have
   multiple concurrent readers.  Which is good, because of lazy
   loading.

   It's important that trees_in/out have internal linkage so that the
   compiler knows core_bools, lang_type_bools and lang_decl_bools have
   only a single caller (tree_node_bools) and inlines them appropriately.  */
namespace {
class trees_in : public bytes_in {
  typedef bytes_in parent;

private:
  module_state *state;		/* Module being imported.  */
  vec<tree> back_refs;		/* Back references.  */
  duplicate_hash_map *duplicates;  /* Map from existing decls to
				      duplicates.  */
  vec<post_process_data> post_decls;  /* Decls to post process.  */
  unsigned unused;		/* Inhibit any interior TREE_USED
				   marking.  */

public:
  trees_in (module_state *);
  ~trees_in ();

public:
  int insert (tree);
  tree back_ref (int);

private:
  tree start (unsigned = 0);

public:
  /* Needed for binfo writing.  */
  bool core_bools (tree, bits_in&);

private:
  /* Stream tree_core, lang_decl_specific and lang_type_specific
     bits.  */
  bool core_vals (tree);
  bool lang_type_bools (tree, bits_in&);
  bool lang_type_vals (tree);
  bool lang_decl_bools (tree, bits_in&);
  bool lang_decl_vals (tree);
  bool lang_vals (tree);
  bool tree_node_bools (tree);
  bool tree_node_vals (tree);
  tree tree_value ();
  tree decl_value ();
  tree tpl_parm_value ();

private:
  tree chained_decls ();  /* Follow DECL_CHAIN.  */
  vec<tree, va_heap> *vec_chained_decls ();
  vec<tree, va_gc> *tree_vec ();  /* vec of tree.  */
  vec<tree_pair_s, va_gc> *tree_pair_vec ();  /* vec of tree_pair.  */
  tree tree_list (bool has_purpose);

public:
  /* Read a tree node.  */
  tree tree_node (bool is_use = false);

private:
  bool install_entity (tree decl);
  tree tpl_parms (unsigned &tpl_levels);
  bool tpl_parms_fini (tree decl, unsigned tpl_levels);
  bool tpl_header (tree decl, unsigned *tpl_levels);
  int fn_parms_init (tree);
  void fn_parms_fini (int tag, tree fn, tree existing, bool has_defn);
  unsigned add_indirect_tpl_parms (tree);
public:
  bool add_indirects (tree);

public:
  /* Serialize various definitions.  */
  bool read_definition (tree decl);

private:
  bool is_matching_decl (tree existing, tree decl, bool is_typedef);
  static bool install_implicit_member (tree decl);
  bool read_function_def (tree decl, tree maybe_template);
  bool read_var_def (tree decl, tree maybe_template);
  bool read_class_def (tree decl, tree maybe_template);
  bool read_enum_def (tree decl, tree maybe_template);

public:
  tree decl_container ();
  tree key_mergeable (int tag, merge_kind, tree decl, tree inner, tree type,
		      tree container, bool is_attached);
  unsigned binfo_mergeable (tree *);

private:
  tree key_local_type (const merge_key&, tree, tree);
  uintptr_t *find_duplicate (tree existing);
  void register_duplicate (tree decl, tree existing);
  /* Mark as an already diagnosed bad duplicate.  */
  void unmatched_duplicate (tree existing)
  {
    *find_duplicate (existing) |= 1;
  }

public:
  bool is_duplicate (tree decl)
  {
    return find_duplicate (decl) != NULL;
  }
  tree maybe_duplicate (tree decl)
  {
    if (uintptr_t *dup = find_duplicate (decl))
      return reinterpret_cast<tree> (*dup & ~uintptr_t (1));
    return decl;
  }
  tree odr_duplicate (tree decl, bool has_defn);

public:
  /* Return the decls to postprocess.  */
  const vec<post_process_data>& post_process ()
  {
    return post_decls;
  }
private:
  /* Register DATA for postprocessing.  */
  void post_process (post_process_data data)
  {
    post_decls.safe_push (data);
  }

private:
  void assert_definition (tree, bool installing);
};
} // anon namespace

trees_in::trees_in (module_state *state)
  :parent (), state (state), unused (0)
{
  duplicates = NULL;
  back_refs.create (500);
  post_decls.create (0);
}

trees_in::~trees_in ()
{
  delete (duplicates);
  back_refs.release ();
  post_decls.release ();
}

/* Tree stream writer.  */
namespace {
class trees_out : public bytes_out {
  typedef bytes_out parent;

private:
  module_state *state;		/* The module we are writing.  */
  ptr_int_hash_map tree_map;	/* Trees to references.  */
  depset::hash *dep_hash;	/* Dependency table.  */
  int ref_num;			/* Back reference number.  */
  unsigned section;
#if CHECKING_P
  int importedness;		/* Checker that imports are not occurring
				   inappropriately.  +ve imports ok,
				   -ve imports not ok.  */
#endif

public:
  trees_out (allocator *, module_state *, depset::hash &deps, unsigned sec = 0);
  ~trees_out ();

private:
  void mark_trees ();
  void unmark_trees ();

public:
  /* Hey, let's ignore the well known STL iterator idiom.  */
  void begin ();
  unsigned end (elf_out *sink, unsigned name, unsigned *crc_ptr);
  void end ();

public:
  enum tags
  {
    tag_backref = -1,	/* Upper bound on the backrefs.  */
    tag_value = 0,	/* Write by value.  */
    tag_fixed		/* Lower bound on the fixed trees.  */
  };

public:
  bool is_key_order () const
  {
    return dep_hash->is_key_order ();
  }

public:
  int insert (tree, walk_kind = WK_normal);

private:
  void start (tree, bool = false);

private:
  walk_kind ref_node (tree);
public:
  int get_tag (tree);
  void set_importing (int i ATTRIBUTE_UNUSED)
  {
#if CHECKING_P
    importedness = i;
#endif
  }

private:
  void core_bools (tree, bits_out&);
  void core_vals (tree);
  void lang_type_bools (tree, bits_out&);
  void lang_type_vals (tree);
  void lang_decl_bools (tree, bits_out&);
  void lang_decl_vals (tree);
  void lang_vals (tree);
  void tree_node_bools (tree);
  void tree_node_vals (tree);

private:
  void chained_decls (tree);
  void vec_chained_decls (tree);
  void tree_vec (vec<tree, va_gc> *);
  void tree_pair_vec (vec<tree_pair_s, va_gc> *);
  void tree_list (tree, bool has_purpose);

public:
  /* Mark a node for by-value walking.  */
  void mark_by_value (tree);

public:
  void tree_node (tree);

private:
  void install_entity (tree decl, depset *);
  void tpl_parms (tree parms, unsigned &tpl_levels);
  void tpl_parms_fini (tree decl, unsigned tpl_levels);
  void fn_parms_fini (tree) {}
  unsigned add_indirect_tpl_parms (tree);
public:
  void add_indirects (tree);
  void fn_parms_init (tree);
  void tpl_header (tree decl, unsigned *tpl_levels);

public:
  merge_kind get_merge_kind (tree decl, depset *maybe_dep);
  tree decl_container (tree decl);
  void key_mergeable (int tag, merge_kind, tree decl, tree inner,
		      tree container, depset *maybe_dep);
  void binfo_mergeable (tree binfo);

private:
  void key_local_type (merge_key&, tree, tree);
  bool decl_node (tree, walk_kind ref);
  void type_node (tree);
  void tree_value (tree);
  void tpl_parm_value (tree);

public:
  void decl_value (tree, depset *);

public:
  /* Serialize various definitions.  */
  void write_definition (tree decl);
  void mark_declaration (tree decl, bool do_defn);

private:
  void mark_function_def (tree decl);
  void mark_var_def (tree decl);
  void mark_class_def (tree decl);
  void mark_enum_def (tree decl);
  void mark_class_member (tree decl, bool do_defn = true);
  void mark_binfos (tree type);

private:
  void write_var_def (tree decl);
  void write_function_def (tree decl);
  void write_class_def (tree decl);
  void write_enum_def (tree decl);

private:
  static void assert_definition (tree);

public:
  static void instrument ();

private:
  /* Tree instrumentation.  */
  static unsigned tree_val_count;
  static unsigned decl_val_count;
  static unsigned back_ref_count;
  static unsigned null_count;
};
} // anon namespace

/* Instrumentation counters.  */
unsigned trees_out::tree_val_count;
unsigned trees_out::decl_val_count;
unsigned trees_out::back_ref_count;
unsigned trees_out::null_count;

trees_out::trees_out (allocator *mem, module_state *state, depset::hash &deps,
		      unsigned section)
  :parent (mem), state (state), tree_map (500),
   dep_hash (&deps), ref_num (0), section (section)
{
#if CHECKING_P
  importedness = 0;
#endif
}

trees_out::~trees_out ()
{
}

/********************************************************************/
/* Location.  We're aware of the line-map concept and reproduce it
   here.  Each imported module allocates a contiguous span of ordinary
   maps, and of macro maps.  Adhoc maps are serialized by contents,
   not pre-allocated.  The scattered linemaps of a module are
   coalesced when writing.  */


/* I use half-open [first,second) ranges.  */
typedef std::pair<unsigned,unsigned> range_t;

/* A range of locations.  */
typedef std::pair<location_t,location_t> loc_range_t;

/* Spans of the line maps that are occupied by this TU.  I.e. not
   within imports.  Only extended when in an interface unit.
   Interval zero corresponds to the forced header linemap(s).  This
   is a singleton object.  */

class loc_spans {
public:
  /* An interval of line maps.  The line maps here represent a contiguous
     non-imported range.  */
  struct span {
    loc_range_t ordinary;	/* Ordinary map location range.  */
    loc_range_t macro;		/* Macro map location range.  */
    int ordinary_delta;	/* Add to ordinary loc to get serialized loc.  */
    int macro_delta;	/* Likewise for macro loc.  */
  };

private:
  vec<span> *spans;

public:
  loc_spans ()
    /* Do not preallocate spans, as that causes
       --enable-detailed-mem-stats problems.  */
    : spans (nullptr)
  {
  }
  ~loc_spans ()
  {
    delete spans;
  }

public:
  span &operator[] (unsigned ix)
  {
    return (*spans)[ix];
  }
  unsigned length () const
  {
    return spans->length ();
  }

public:
  bool init_p () const
  {
    return spans != nullptr;
  }
  /* Initializer.  */
  void init (const line_maps *lmaps, const line_map_ordinary *map);

  /* Slightly skewed preprocessed files can cause us to miss an
     initialization in some places.  Fallback initializer.  */
  void maybe_init ()
  {
    if (!init_p ())
      init (line_table, nullptr);
  }

public:
  enum {
    SPAN_RESERVED = 0,	/* Reserved (fixed) locations.  */
    SPAN_FIRST = 1,	/* LWM of locations to stream.  */
    SPAN_MAIN = 2	/* Main file and onwards.  */
  };

public:
  location_t main_start () const
  {
    return (*spans)[SPAN_MAIN].ordinary.first;
  }

public:
  void open (location_t);
  void close ();

public:
  /* Propagate imported linemaps to us, if needed.  */
  bool maybe_propagate (module_state *import, location_t loc);

public:
  const span *ordinary (location_t);
  const span *macro (location_t);
};

static loc_spans spans;

/* Information about ordinary locations we stream out.  */
struct ord_loc_info
{
  const line_map_ordinary *src;	// line map we're based on
  unsigned offset;	// offset to this line
  unsigned span;	// number of locs we span
  unsigned remap;	// serialization

  static int compare (const void *a_, const void *b_)
  {
    auto *a = static_cast<const ord_loc_info *> (a_);
    auto *b = static_cast<const ord_loc_info *> (b_);

    if (a->src != b->src)
      return a->src < b->src ? -1 : +1;

    // Ensure no overlap.
    gcc_checking_assert (a->offset + a->span <= b->offset
			 || b->offset + b->span <= a->offset);

    gcc_checking_assert (a->offset != b->offset);
    return a->offset < b->offset ? -1 : +1;
  }
};
struct ord_loc_traits
{
  typedef ord_loc_info value_type;
  typedef value_type compare_type;

  static const bool empty_zero_p = false;

  static hashval_t hash (const value_type &v)
  {
    auto h = pointer_hash<const line_map_ordinary>::hash (v.src);
    return iterative_hash_hashval_t (v.offset, h);
  }
  static bool equal (const value_type &v, const compare_type p)
  {
    return v.src == p.src && v.offset == p.offset;
  }

  static void mark_empty (value_type &v)
  {
    v.src = nullptr;
  }
  static bool is_empty (value_type &v)
  {
    return !v.src;
  }

  static bool is_deleted (value_type &) { return false; }
  static void mark_deleted (value_type &) { gcc_unreachable (); }

  static void remove (value_type &) {}
};
/* Table keyed by ord_loc_info, used for noting.  */
static hash_table<ord_loc_traits> *ord_loc_table;
/* Sorted vector, used for writing.  */
static vec<ord_loc_info> *ord_loc_remap;

/* Information about macro locations we stream out.  */
struct macro_loc_info
{
  const line_map_macro *src;	// original expansion
  unsigned remap;		// serialization

  static int compare (const void *a_, const void *b_)
  {
    auto *a = static_cast<const macro_loc_info *> (a_);
    auto *b = static_cast<const macro_loc_info *> (b_);

    gcc_checking_assert (MAP_START_LOCATION (a->src)
			 != MAP_START_LOCATION (b->src));
    if (MAP_START_LOCATION (a->src) < MAP_START_LOCATION (b->src))
      return -1;
    else
      return +1;
  }
};
struct macro_loc_traits
{
  typedef macro_loc_info value_type;
  typedef const line_map_macro *compare_type;

  static const bool empty_zero_p = false;

  static hashval_t hash (compare_type p)
  {
    return pointer_hash<const line_map_macro>::hash (p);
  }
  static hashval_t hash (const value_type &v)
  {
    return hash (v.src);
  }
  static bool equal (const value_type &v, const compare_type p)
  {
    return v.src == p;
  }

  static void mark_empty (value_type &v)
  {
    v.src = nullptr;
  }
  static bool is_empty (value_type &v)
  {
    return !v.src;
  }

  static bool is_deleted (value_type &) { return false; }
  static void mark_deleted (value_type &) { gcc_unreachable (); }

  static void remove (value_type &) {}
};
/* Table keyed by line_map_macro, used for noting.  */
static hash_table<macro_loc_traits> *macro_loc_table;
/* Sorted vector, used for writing.  */
static vec<macro_loc_info> *macro_loc_remap;

/* Indirection to allow bsearching imports by ordinary location.  */
static vec<module_state *> *ool;

3400/********************************************************************/
3401/* Data needed by a module during the process of loading. */
3402struct GTY(()) slurping {
3403
3404 /* Remap import's module numbering to our numbering. Values are
3405 shifted by 1. Bit0 encodes if the import is direct. */
3406 vec<unsigned, va_heap, vl_embed> *
3407 GTY((skip)) remap; /* Module owner remapping. */
3408
3409 elf_in *GTY((skip)) from; /* The elf loader. */
3410
3411 /* This map is only for header imports themselves -- the global
3412 headers bitmap hold it for the current TU. */
3413 bitmap headers; /* Transitive set of direct imports, including
3414 self. Used for macro visibility and
3415 priority. */
3416
3417 /* These objects point into the mmapped area, unless we're not doing
3418 that, or we got frozen or closed. In those cases they point to
3419 buffers we own. */
3420 bytes_in macro_defs; /* Macro definitions. */
3421 bytes_in macro_tbl; /* Macro table. */
3422
3423 /* Location remapping. first->ordinary, second->macro. */
3424 range_t GTY((skip)) loc_deltas;
3425
3426 unsigned current; /* Section currently being loaded. */
3427 unsigned remaining; /* Number of lazy sections yet to read. */
3428 unsigned lru; /* An LRU counter. */
3429
3430 public:
3431 slurping (elf_in *);
3432 ~slurping ();
3433
3434 public:
3435 /* Close the ELF file, if it's open. */
3436 void close ()
3437 {
3438 if (from)
3439 {
3440 from->end ();
3441 delete from;
3442 from = NULL;
3443 }
3444 }
3445
3446 public:
3447 void release_macros ();
3448
3449 public:
3450 void alloc_remap (unsigned size)
3451 {
3452 gcc_assert (!remap);
3453 vec_safe_reserve (v&: remap, nelems: size);
3454 for (unsigned ix = size; ix--;)
3455 remap->quick_push (obj: 0);
3456 }
3457 unsigned remap_module (unsigned owner)
3458 {
3459 if (owner < remap->length ())
3460 return (*remap)[owner] >> 1;
3461 return 0;
3462 }
3463
3464 public:
3465 /* GC allocation. But we must explicitly delete it. */
3466 static void *operator new (size_t x)
3467 {
3468 return ggc_alloc_atomic (s: x);
3469 }
3470 static void operator delete (void *p)
3471 {
3472 ggc_free (p);
3473 }
3474};

slurping::slurping (elf_in *from)
  : remap (NULL), from (from),
    headers (BITMAP_GGC_ALLOC ()), macro_defs (), macro_tbl (),
    loc_deltas (0, 0),
    current (~0u), remaining (0), lru (0)
{
}

slurping::~slurping ()
{
  vec_free (remap);
  remap = NULL;
  release_macros ();
  close ();
}

void slurping::release_macros ()
{
  if (macro_defs.size)
    elf_in::release (from, macro_defs);
  if (macro_tbl.size)
    elf_in::release (from, macro_tbl);
}

/* Flags for extensions that end up being streamed.  */

enum streamed_extensions {
  SE_OPENMP = 1 << 0,
  SE_BITS = 1
};

/* Counter indices.  */
enum module_state_counts
{
  MSC_sec_lwm,
  MSC_sec_hwm,
  MSC_pendings,
  MSC_entities,
  MSC_namespaces,
  MSC_bindings,
  MSC_macros,
  MSC_inits,
  MSC_HWM
};

/********************************************************************/
struct module_state_config;

/* Increasing levels of loadedness.  */
enum module_loadedness {
  ML_NONE,		/* Not loaded.  */
  ML_CONFIG,		/* Config loaded.  */
  ML_PREPROCESSOR,	/* Preprocessor loaded.  */
  ML_LANGUAGE,		/* Language loaded.  */
};

/* Increasing levels of directness (toplevel) of import.  */
enum module_directness {
  MD_NONE,		/* Not direct.  */
  MD_PARTITION_DIRECT,	/* Direct import of a partition.  */
  MD_DIRECT,		/* Direct import.  */
  MD_PURVIEW_DIRECT,	/* Direct import in purview.  */
};

/* State of a particular module.  */

class GTY((chain_next ("%h.parent"), for_user)) module_state {
 public:
  /* We always import & export ourselves.  */
  bitmap imports;	/* Transitive modules we're importing.  */
  bitmap exports;	/* Subset of that, that we're exporting.  */

  module_state *parent;
  tree name;		/* Name of the module.  */

  slurping *slurp;	/* Data for loading.  */

  const char *flatname;	/* Flatname of module.  */
  char *filename;	/* CMI filename.  */

  /* Indices into the entity_ary.  */
  unsigned entity_lwm;
  unsigned entity_num;

  /* Location ranges for this module.  adhoc-locs are decomposed, so
     don't have a range.  */
  loc_range_t GTY((skip)) ordinary_locs;
  loc_range_t GTY((skip)) macro_locs; // [lwm,num)

  /* LOC is first set to the importing location.  When initially
     loaded it refers to a module loc whose parent is the importing
     location.  */
  location_t loc;	/* Location referring to module itself.  */
  unsigned crc;		/* CRC we saw reading it in.  */

  unsigned mod;		/* Module owner number.  */
  unsigned remap;	/* Remapping during writing.  */

  unsigned short subst;	/* Mangle subst if !0.  */

  /* How loaded this module is.  */
  enum module_loadedness loadedness : 2;

  bool module_p : 1;	/* /The/ module of this TU.  */
  bool header_p : 1;	/* Is a header unit.  */
  bool interface_p : 1;	/* An interface.  */
  bool partition_p : 1;	/* A partition.  */

  /* How directly this module is imported.  */
  enum module_directness directness : 2;

  bool exported_p : 1;	/* directness != MD_NONE && exported.  */
  bool cmi_noted_p : 1;	/* We've told the user about the CMI, don't
			   do it again.  */
  bool active_init_p : 1; /* This module's global initializer needs
			     calling.  */
  bool inform_cmi_p : 1; /* Inform of a read/write.  */
  bool visited_p : 1;	/* A walk-once flag.  */
  /* Record extensions emitted or permitted.  */
  unsigned extensions : SE_BITS;
  /* 14 bits used, 2 bits remain.  */

 public:
  module_state (tree name, module_state *, bool);
  ~module_state ();

 public:
  void release ()
  {
    imports = exports = NULL;
    slurped ();
  }
  void slurped ()
  {
    delete slurp;
    slurp = NULL;
  }
  elf_in *from () const
  {
    return slurp->from;
  }

 public:
  /* Kind of this module.  */
  bool is_module () const
  {
    return module_p;
  }
  bool is_header () const
  {
    return header_p;
  }
  bool is_interface () const
  {
    return interface_p;
  }
  bool is_partition () const
  {
    return partition_p;
  }

  /* How this module is used in the current TU.  */
  bool is_exported () const
  {
    return exported_p;
  }
  bool is_direct () const
  {
    return directness >= MD_DIRECT;
  }
  bool is_purview_direct () const
  {
    return directness == MD_PURVIEW_DIRECT;
  }
  bool is_partition_direct () const
  {
    return directness == MD_PARTITION_DIRECT;
  }

 public:
  /* Is this a real module?  */
  bool has_location () const
  {
    return loc != UNKNOWN_LOCATION;
  }

 public:
  bool check_not_purview (location_t loc);

 public:
  void mangle (bool include_partition);

 public:
  void set_import (module_state const *, bool is_export);
  void announce (const char *) const;

 public:
  /* Read and write module.  */
  void write_begin (elf_out *to, cpp_reader *,
		    module_state_config &, unsigned &crc);
  void write_end (elf_out *to, cpp_reader *,
		  module_state_config &, unsigned &crc);
  bool read_initial (cpp_reader *);
  bool read_preprocessor (bool);
  bool read_language (bool);

 public:
  /* Read a section.  */
  bool load_section (unsigned snum, binding_slot *mslot);
  /* Lazily read a section.  */
  bool lazy_load (unsigned index, binding_slot *mslot);

 public:
  /* Juggle a limited number of file numbers.  */
  static void freeze_an_elf ();
  bool maybe_defrost ();

 public:
  void maybe_completed_reading ();
  bool check_read (bool outermost, bool ok);

 private:
  /* The README, for human consumption.  */
  void write_readme (elf_out *to, cpp_reader *, const char *dialect);
  void write_env (elf_out *to);

 private:
  /* Import tables.  */
  void write_imports (bytes_out &cfg, bool direct);
  unsigned read_imports (bytes_in &cfg, cpp_reader *, line_maps *maps);

 private:
  void write_imports (elf_out *to, unsigned *crc_ptr);
  bool read_imports (cpp_reader *, line_maps *);

 private:
  void write_partitions (elf_out *to, unsigned, unsigned *crc_ptr);
  bool read_partitions (unsigned);

 private:
  void write_config (elf_out *to, struct module_state_config &, unsigned crc);
  bool read_config (struct module_state_config &);
  static void write_counts (elf_out *to, unsigned [MSC_HWM], unsigned *crc_ptr);
  bool read_counts (unsigned *);

 public:
  void note_cmi_name ();

 private:
  static unsigned write_bindings (elf_out *to, vec<depset *> depsets,
				  unsigned *crc_ptr);
  bool read_bindings (unsigned count, unsigned lwm, unsigned hwm);

  static void write_namespace (bytes_out &sec, depset *ns_dep);
  tree read_namespace (bytes_in &sec);

  void write_namespaces (elf_out *to, vec<depset *> spaces,
			 unsigned, unsigned *crc_ptr);
  bool read_namespaces (unsigned);

  void intercluster_seed (trees_out &sec, unsigned index, depset *dep);
  unsigned write_cluster (elf_out *to, depset *depsets[], unsigned size,
			  depset::hash &, unsigned *counts, unsigned *crc_ptr);
  bool read_cluster (unsigned snum);

 private:
  unsigned write_inits (elf_out *to, depset::hash &, unsigned *crc_ptr);
  bool read_inits (unsigned count);

 private:
  unsigned write_pendings (elf_out *to, vec<depset *> depsets,
			   depset::hash &, unsigned *crc_ptr);
  bool read_pendings (unsigned count);

 private:
  void write_entities (elf_out *to, vec<depset *> depsets,
		       unsigned count, unsigned *crc_ptr);
  bool read_entities (unsigned count, unsigned lwm, unsigned hwm);

 private:
  void write_init_maps ();
  range_t write_prepare_maps (module_state_config *, bool);
  bool read_prepare_maps (const module_state_config *);

  void write_ordinary_maps (elf_out *to, range_t &,
			    bool, unsigned *crc_ptr);
  bool read_ordinary_maps (unsigned, unsigned);
  void write_macro_maps (elf_out *to, range_t &, unsigned *crc_ptr);
  bool read_macro_maps (unsigned);

 private:
  void write_define (bytes_out &, const cpp_macro *);
  cpp_macro *read_define (bytes_in &, cpp_reader *) const;
  vec<cpp_hashnode *> *prepare_macros (cpp_reader *);
  unsigned write_macros (elf_out *to, vec<cpp_hashnode *> *, unsigned *crc_ptr);
  bool read_macros ();
  void install_macros ();

 public:
  void import_macros ();

 public:
  static void undef_macro (cpp_reader *, location_t, cpp_hashnode *);
  static cpp_macro *deferred_macro (cpp_reader *, location_t, cpp_hashnode *);

 public:
  static bool note_location (location_t);
  static void write_location (bytes_out &, location_t);
  location_t read_location (bytes_in &) const;

 public:
  void set_flatname ();
  const char *get_flatname () const
  {
    return flatname;
  }
  location_t imported_from () const;

 public:
  void set_filename (const Cody::Packet &);
  bool do_import (cpp_reader *, bool outermost);
};

/* Hash module state by name.  This cannot be a member of
   module_state, because of GTY restrictions.  We never delete from
   the hash table, but ggc_ptr_hash doesn't support that
   simplification.  */

struct module_state_hash : ggc_ptr_hash<module_state> {
  typedef std::pair<tree,uintptr_t> compare_type; /* {name,parent} */

  static inline hashval_t hash (const value_type m);
  static inline hashval_t hash (const compare_type &n);
  static inline bool equal (const value_type existing,
			    const compare_type &candidate);
};

module_state::module_state (tree name, module_state *parent, bool partition)
  : imports (BITMAP_GGC_ALLOC ()), exports (BITMAP_GGC_ALLOC ()),
    parent (parent), name (name), slurp (NULL),
    flatname (NULL), filename (NULL),
    entity_lwm (~0u >> 1), entity_num (0),
    ordinary_locs (0, 0), macro_locs (0, 0),
    loc (UNKNOWN_LOCATION),
    crc (0), mod (MODULE_UNKNOWN), remap (0), subst (0)
{
  loadedness = ML_NONE;

  module_p = header_p = interface_p = partition_p = false;

  directness = MD_NONE;
  exported_p = false;

  cmi_noted_p = false;
  active_init_p = false;

  partition_p = partition;

  inform_cmi_p = false;
  visited_p = false;

  extensions = 0;
  if (name && TREE_CODE (name) == STRING_CST)
    {
      header_p = true;

      const char *string = TREE_STRING_POINTER (name);
      gcc_checking_assert (string[0] == '.'
			   ? IS_DIR_SEPARATOR (string[1])
			   : IS_ABSOLUTE_PATH (string));
    }

  gcc_checking_assert (!(parent && header_p));
}

module_state::~module_state ()
{
  release ();
}

/* Hash module state.  */
static hashval_t
module_name_hash (const_tree name)
{
  if (TREE_CODE (name) == STRING_CST)
    return htab_hash_string (TREE_STRING_POINTER (name));
  else
    return IDENTIFIER_HASH_VALUE (name);
}

hashval_t
module_state_hash::hash (const value_type m)
{
  hashval_t ph = pointer_hash<void>::hash
    (reinterpret_cast<void *> (reinterpret_cast<uintptr_t> (m->parent)
			       | m->is_partition ()));
  hashval_t nh = module_name_hash (m->name);
  return iterative_hash_hashval_t (ph, nh);
}

/* Hash a name.  */
hashval_t
module_state_hash::hash (const compare_type &c)
{
  hashval_t ph = pointer_hash<void>::hash (reinterpret_cast<void *> (c.second));
  hashval_t nh = module_name_hash (c.first);

  return iterative_hash_hashval_t (ph, nh);
}

bool
module_state_hash::equal (const value_type existing,
			  const compare_type &candidate)
{
  uintptr_t ep = (reinterpret_cast<uintptr_t> (existing->parent)
		  | existing->is_partition ());
  if (ep != candidate.second)
    return false;

  /* Identifier comparison is by pointer.  If the string_csts happen
     to be the same object, then they're equal too.  */
  if (existing->name == candidate.first)
    return true;

  /* If neither are string csts, they can't be equal.  */
  if (TREE_CODE (candidate.first) != STRING_CST
      || TREE_CODE (existing->name) != STRING_CST)
    return false;

  /* String equality.  */
  if (TREE_STRING_LENGTH (existing->name)
      == TREE_STRING_LENGTH (candidate.first)
      && !memcmp (TREE_STRING_POINTER (existing->name),
		  TREE_STRING_POINTER (candidate.first),
		  TREE_STRING_LENGTH (existing->name)))
    return true;

  return false;
}

/********************************************************************/
/* Global state  */

/* Mapper name.  */
static const char *module_mapper_name;

/* Deferred import queue (FIFO).  */
static vec<module_state *, va_heap, vl_embed> *pending_imports;

/* CMI repository path and workspace.  */
static char *cmi_repo;
static size_t cmi_repo_length;
static char *cmi_path;
static size_t cmi_path_alloc;

/* Count of available and loaded clusters.  */
static unsigned available_clusters;
static unsigned loaded_clusters;

/* What the current TU is.  */
unsigned module_kind;

/* Global trees.  */
static const std::pair<tree *, unsigned> global_tree_arys[] =
  {
    std::pair<tree *, unsigned> (sizetype_tab, stk_type_kind_last),
    std::pair<tree *, unsigned> (integer_types, itk_none),
    std::pair<tree *, unsigned> (global_trees, TI_MODULE_HWM),
    std::pair<tree *, unsigned> (c_global_trees, CTI_MODULE_HWM),
    std::pair<tree *, unsigned> (cp_global_trees, CPTI_MODULE_HWM),
    std::pair<tree *, unsigned> (NULL, 0)
  };
static GTY(()) vec<tree, va_gc> *fixed_trees;
static unsigned global_crc;

/* Lazy loading can open many files concurrently; there are
   per-process limits on that.  We pay attention to the process limit,
   and attempt to increase it when we run out.  Otherwise we use an
   LRU scheme to figure out who to flush.  Note that if the import
   graph /depth/ exceeds lazy_limit, we'll exceed the limit.  */
static unsigned lazy_lru;	/* LRU counter.  */
static unsigned lazy_open;	/* Number of open modules.  */
static unsigned lazy_limit;	/* Current limit of open modules.  */
static unsigned lazy_hard_limit; /* Hard limit on open modules.  */
/* Account for source, assembler and dump files & directory searches.
   We don't keep the source files open, so we don't have to account
   for #include depth.  I think dump files are opened and closed per
   pass, but ICBW.  */
#define LAZY_HEADROOM 15	/* File descriptor headroom.  */

/* Vector of module state.  Indexed by OWNER.  Has at least 2 slots.  */
static GTY(()) vec<module_state *, va_gc> *modules;

/* Hash of module state, findable by {name, parent}.  */
static GTY(()) hash_table<module_state_hash> *modules_hash;

/* Map of imported entities.  We map DECL_UID to index of entity
   vector.  */
typedef hash_map<unsigned/*UID*/, unsigned/*index*/,
		 simple_hashmap_traits<int_hash<unsigned,0>, unsigned>
		 > entity_map_t;
static entity_map_t *entity_map;
/* Doesn't need GTYing, because any tree referenced here is also
   findable by the symbol table, the specialization table, or the
   return type of a reachable function.  */
static vec<binding_slot, va_heap, vl_embed> *entity_ary;

/* Member entities of imported classes that are defined in this TU.
   These are where the entity's context is not from the current TU.
   We need to emit the definition (but not the enclosing class).

   We could find these by walking ALL the imported classes to which we
   could provide a member definition.  But that's expensive,
   especially when you consider lazy implicit member declarations,
   which could be in ANY imported class.  */
static GTY(()) vec<tree, va_gc> *class_members;

/* The same problem exists for class template partial
   specializations.  Now that we have constraints, the invariant of
   expecting them in the instantiation table no longer holds.  One of
   the constrained partial specializations will be there, but the
   others not so much.  It's not even an unconstrained partial
   specialization in the table :( so any partial template declaration
   is added to this list too.  */
static GTY(()) vec<tree, va_gc> *partial_specializations;

/********************************************************************/

/* Our module mapper (created lazily).  */
module_client *mapper;

static module_client *make_mapper (location_t loc, class mkdeps *deps);
inline module_client *get_mapper (location_t loc, class mkdeps *deps)
{
  auto *res = mapper;
  if (!res)
    res = make_mapper (loc, deps);
  return res;
}

/********************************************************************/
static tree
get_clone_target (tree decl)
{
  tree target;

  if (TREE_CODE (decl) == TEMPLATE_DECL)
    {
      tree res_orig = DECL_CLONED_FUNCTION (DECL_TEMPLATE_RESULT (decl));

      target = DECL_TI_TEMPLATE (res_orig);
    }
  else
    target = DECL_CLONED_FUNCTION (decl);

  gcc_checking_assert (DECL_MAYBE_IN_CHARGE_CDTOR_P (target));

  return target;
}

/* Like FOR_EACH_CLONE, but will walk cloned templates.  */
#define FOR_EVERY_CLONE(CLONE, FN) \
  if (!DECL_MAYBE_IN_CHARGE_CDTOR_P (FN));		\
  else							\
    for (CLONE = DECL_CHAIN (FN);			\
	 CLONE && DECL_CLONED_FUNCTION_P (CLONE);	\
	 CLONE = DECL_CHAIN (CLONE))

/* It'd be nice if USE_TEMPLATE was a field of template_info:
   (a) it'd solve the enum case dealt with below,
   (b) both class templates and decl templates would store this in the
   same place,
   (c) this function wouldn't need the by-ref arg, which is annoying.  */

static tree
node_template_info (tree decl, int &use)
{
  tree ti = NULL_TREE;
  int use_tpl = -1;
  if (DECL_IMPLICIT_TYPEDEF_P (decl))
    {
      tree type = TREE_TYPE (decl);

      ti = TYPE_TEMPLATE_INFO (type);
      if (ti)
	{
	  if (TYPE_LANG_SPECIFIC (type))
	    use_tpl = CLASSTYPE_USE_TEMPLATE (type);
	  else
	    {
	      /* An enum, where we don't explicitly encode use_tpl.
		 If the containing context (a type or a function) is
		 an ({im,ex}plicit) instantiation, then this is too.
		 If it's a partial or explicit specialization, then
		 this is not!  */
	      tree ctx = CP_DECL_CONTEXT (decl);
	      if (TYPE_P (ctx))
		ctx = TYPE_NAME (ctx);
	      node_template_info (ctx, use);
	      use_tpl = use != 2 ? use : 0;
	    }
	}
    }
  else if (DECL_LANG_SPECIFIC (decl)
	   && (VAR_P (decl)
	       || TREE_CODE (decl) == TYPE_DECL
	       || TREE_CODE (decl) == FUNCTION_DECL
	       || TREE_CODE (decl) == FIELD_DECL
	       || TREE_CODE (decl) == CONCEPT_DECL
	       || TREE_CODE (decl) == TEMPLATE_DECL))
    {
      use_tpl = DECL_USE_TEMPLATE (decl);
      ti = DECL_TEMPLATE_INFO (decl);
    }

  use = use_tpl;
  return ti;
}

/* Find the index in entity_ary for an imported DECL.  It should
   always be there, but bugs can cause it to be missing, and that can
   crash the crash reporting -- let's not do that!  When streaming
   out we place entities from this module there too -- with negated
   indices.  */

static unsigned
import_entity_index (tree decl, bool null_ok = false)
{
  if (unsigned *slot = entity_map->get (DECL_UID (decl)))
    return *slot;

  gcc_checking_assert (null_ok);
  return ~(~0u >> 1);
}

/* Find the module for an imported entity at INDEX in the entity ary.
   There must be one.  */

static module_state *
import_entity_module (unsigned index)
{
  if (index > ~(~0u >> 1))
    /* This is an index for an exported entity.  */
    return (*modules)[0];

  /* Do not include the current TU (not an off-by-one error).  */
  unsigned pos = 1;
  unsigned len = modules->length () - pos;
  while (len)
    {
      unsigned half = len / 2;
      module_state *probe = (*modules)[pos + half];
      if (index < probe->entity_lwm)
	len = half;
      else if (index < probe->entity_lwm + probe->entity_num)
	return probe;
      else
	{
	  pos += half + 1;
	  len = len - (half + 1);
	}
    }
  gcc_unreachable ();
}


/********************************************************************/
/* A dumping machinery.  */

class dumper {
public:
  enum {
    LOCATION = TDF_LINENO,	/* -lineno:Source location streaming.  */
    DEPEND = TDF_GRAPH,		/* -graph:Dependency graph construction.  */
    CLUSTER = TDF_BLOCKS,	/* -blocks:Clusters.  */
    TREE = TDF_UID,		/* -uid:Tree streaming.  */
    MERGE = TDF_ALIAS,		/* -alias:Mergeable Entities.  */
    ELF = TDF_ASMNAME,		/* -asmname:Elf data.  */
    MACRO = TDF_VOPS		/* -vops:Macros.  */
  };

private:
  struct impl {
    typedef vec<module_state *, va_heap, vl_embed> stack_t;

    FILE *stream;	/* Dump stream.  */
    unsigned indent;	/* Local indentation.  */
    bool bol;		/* Beginning of line.  */
    stack_t stack;	/* Trailing array of module_state.  */

    bool nested_name (tree);  /* Dump a name following DECL_CONTEXT.  */
  };

public:
  /* The dumper.  */
  impl *dumps;
  dump_flags_t flags;

public:
  /* Push/pop module state dumping.  */
  unsigned push (module_state *);
  void pop (unsigned);

public:
  /* Change local indentation.  */
  void indent ()
  {
    if (dumps)
      dumps->indent++;
  }
  void outdent ()
  {
    if (dumps)
      {
	gcc_checking_assert (dumps->indent);
	dumps->indent--;
      }
  }

public:
  /* Is dump enabled?  */
  bool operator () (int mask = 0)
  {
    if (!dumps || !dumps->stream)
      return false;
    if (mask && !(mask & flags))
      return false;
    return true;
  }
  /* Dump some information.  */
  bool operator () (const char *, ...);
};

/* The dumper.  */
static dumper dump = {0, dump_flags_t (0)};

/* Push to dumping M.  Return previous indentation level.  */

unsigned
dumper::push (module_state *m)
{
  FILE *stream = NULL;
  if (!dumps || !dumps->stack.length ())
    {
      stream = dump_begin (module_dump_id, &flags);
      if (!stream)
	return 0;
    }

  if (!dumps || !dumps->stack.space (1))
    {
      /* Create or extend the dump implementor.  */
      unsigned current = dumps ? dumps->stack.length () : 0;
      unsigned count = current ? current * 2 : EXPERIMENT (1, 20);
      size_t alloc = (offsetof (impl, stack)
		      + impl::stack_t::embedded_size (count));
      dumps = XRESIZEVAR (impl, dumps, alloc);
      dumps->stack.embedded_init (count, current);
    }
  if (stream)
    dumps->stream = stream;

  unsigned n = dumps->indent;
  dumps->indent = 0;
  dumps->bol = true;
  dumps->stack.quick_push (m);
  if (m)
    {
      module_state *from = NULL;

      if (dumps->stack.length () > 1)
	from = dumps->stack[dumps->stack.length () - 2];
      else
	dump ("");
      dump (from ? "Starting module %M (from %M)"
	    : "Starting module %M", m, from);
    }

  return n;
}

/* Pop from dumping.  Restore indentation to N.  */

void dumper::pop (unsigned n)
{
  if (!dumps)
    return;

  gcc_checking_assert (dump () && !dumps->indent);
  if (module_state *m = dumps->stack[dumps->stack.length () - 1])
    {
      module_state *from = (dumps->stack.length () > 1
			    ? dumps->stack[dumps->stack.length () - 2] : NULL);
      dump (from ? "Finishing module %M (returning to %M)"
	    : "Finishing module %M", m, from);
    }
  dumps->stack.pop ();
  dumps->indent = n;
  if (!dumps->stack.length ())
    {
      dump_end (module_dump_id, dumps->stream);
      dumps->stream = NULL;
    }
}

/* Dump a nested name for arbitrary tree T.  Sometimes it won't have a
   name.  */

bool
dumper::impl::nested_name (tree t)
{
  tree ti = NULL_TREE;
  int origin = -1;
  tree name = NULL_TREE;

  if (t && TREE_CODE (t) == TREE_BINFO)
    t = BINFO_TYPE (t);

  if (t && TYPE_P (t))
    t = TYPE_NAME (t);

  if (t && DECL_P (t))
    {
      if (t == global_namespace || DECL_TEMPLATE_PARM_P (t))
	;
      else if (tree ctx = DECL_CONTEXT (t))
	if (TREE_CODE (ctx) == TRANSLATION_UNIT_DECL
	    || nested_name (ctx))
	  fputs ("::", stream);

      int use_tpl;
      ti = node_template_info (t, use_tpl);
      if (ti && TREE_CODE (TI_TEMPLATE (ti)) == TEMPLATE_DECL
	  && (DECL_TEMPLATE_RESULT (TI_TEMPLATE (ti)) == t))
	t = TI_TEMPLATE (ti);
      tree not_tmpl = t;
      if (TREE_CODE (t) == TEMPLATE_DECL)
	{
	  fputs ("template ", stream);
	  not_tmpl = DECL_TEMPLATE_RESULT (t);
	}

      if (not_tmpl
	  && DECL_P (not_tmpl)
	  && DECL_LANG_SPECIFIC (not_tmpl)
	  && DECL_MODULE_IMPORT_P (not_tmpl))
	{
	  /* We need to be careful here, so as to not explode on
	     inconsistent data -- we're probably debugging, because
	     Something Is Wrong.  */
	  unsigned index = import_entity_index (t, true);
	  if (!(index & ~(~0u >> 1)))
	    origin = import_entity_module (index)->mod;
	  else if (index > ~(~0u >> 1))
	    /* An imported partition member that we're emitting.  */
	    origin = 0;
	  else
	    origin = -2;
	}

      name = DECL_NAME (t) ? DECL_NAME (t)
	: HAS_DECL_ASSEMBLER_NAME_P (t) ? DECL_ASSEMBLER_NAME_RAW (t)
	: NULL_TREE;
    }
  else
    name = t;

  if (name)
    switch (TREE_CODE (name))
      {
      default:
	fputs ("#unnamed#", stream);
	break;

      case IDENTIFIER_NODE:
	fwrite (IDENTIFIER_POINTER (name), 1, IDENTIFIER_LENGTH (name), stream);
	break;

      case INTEGER_CST:
	print_hex (wi::to_wide (name), stream);
	break;

      case STRING_CST:
	/* If TREE_TYPE is NULL, this is a raw string.  */
	fwrite (TREE_STRING_POINTER (name), 1,
		TREE_STRING_LENGTH (name) - (TREE_TYPE (name) != NULL_TREE),
		stream);
	break;
      }
  else
    fputs ("#null#", stream);

  if (origin >= 0)
    {
      const module_state *module = (*modules)[origin];
      fprintf (stream, "@%s:%d", !module ? "" : !module->name ? "(unnamed)"
	       : module->get_flatname (), origin);
    }
  else if (origin == -2)
    fprintf (stream, "@???");

  if (ti)
    {
      tree args = INNERMOST_TEMPLATE_ARGS (TI_ARGS (ti));
      fputs ("<", stream);
      if (args)
	for (int ix = 0; ix != TREE_VEC_LENGTH (args); ix++)
	  {
	    if (ix)
	      fputs (",", stream);
	    nested_name (TREE_VEC_ELT (args, ix));
	  }
      fputs (">", stream);
    }

  return true;
}
4394/* Formatted dumping. FORMAT begins with '+' do not emit a trailing
4395 new line. (Normally it is appended.)
4396 Escapes:
4397 %C - tree_code
4398 %I - identifier
4399 %M - module_state
4400 %N - name -- DECL_NAME
4401 %P - context:name pair
4402 %R - unsigned:unsigned ratio
4403 %S - symbol -- DECL_ASSEMBLER_NAME
4404 %U - long unsigned
4405 %V - version
4406 --- the following are printf-like, but without its flexibility
4407 %d - decimal int
4408 %p - pointer
4409 %s - string
4410 %u - unsigned int
4411 %x - hex int
4412
4413 We do not implement the printf modifiers. */
4414
4415bool
4416dumper::operator () (const char *format, ...)
4417{
4418 if (!(*this) ())
4419 return false;
4420
4421 bool no_nl = format[0] == '+';
4422 format += no_nl;
4423
4424 if (dumps->bol)
4425 {
4426 /* Module import indent. */
4427 if (unsigned depth = dumps->stack.length () - 1)
4428 {
4429 const char *prefix = ">>>>";
4430 fprintf (stream: dumps->stream, format: (depth <= strlen (s: prefix)
4431 ? &prefix[strlen (s: prefix) - depth]
4432 : ">.%d.>"), depth);
4433 }
4434
4435 /* Local indent. */
4436 if (unsigned indent = dumps->indent)
4437 {
4438 const char *prefix = " ";
4439 fprintf (stream: dumps->stream, format: (indent <= strlen (s: prefix)
4440 ? &prefix[strlen (s: prefix) - indent]
4441 : " .%d. "), indent);
4442 }
4443 dumps->bol = false;
4444 }
4445
  va_list args;
  va_start (args, format);
  while (const char *esc = strchr (format, '%'))
    {
      fwrite (format, 1, (size_t)(esc - format), dumps->stream);
      format = ++esc;
      switch (*format++)
        {
        default:
          gcc_unreachable ();

        case '%':
          fputc ('%', dumps->stream);
          break;

        case 'C': /* Code.  */
          {
            tree_code code = (tree_code)va_arg (args, unsigned);
            fputs (get_tree_code_name (code), dumps->stream);
          }
          break;

        case 'I': /* Identifier.  */
          {
            tree t = va_arg (args, tree);
            dumps->nested_name (t);
          }
          break;

        case 'M': /* Module.  */
          {
            const char *str = "(none)";
            if (module_state *m = va_arg (args, module_state *))
              {
                if (!m->has_location ())
                  str = "(detached)";
                else
                  str = m->get_flatname ();
              }
            fputs (str, dumps->stream);
          }
          break;

        case 'N': /* Name.  */
          {
            tree t = va_arg (args, tree);
            while (t && TREE_CODE (t) == OVERLOAD)
              t = OVL_FUNCTION (t);
            fputc ('\'', dumps->stream);
            dumps->nested_name (t);
            fputc ('\'', dumps->stream);
          }
          break;

        case 'P': /* Pair.  */
          {
            tree ctx = va_arg (args, tree);
            tree name = va_arg (args, tree);
            fputc ('\'', dumps->stream);
            dumps->nested_name (ctx);
            if (ctx && ctx != global_namespace)
              fputs ("::", dumps->stream);
            dumps->nested_name (name);
            fputc ('\'', dumps->stream);
          }
          break;

        case 'R': /* Ratio.  */
          {
            unsigned a = va_arg (args, unsigned);
            unsigned b = va_arg (args, unsigned);
            fprintf (dumps->stream, "%.1f", (float) a / (b + !b));
          }
          break;

        case 'S': /* Symbol name.  */
          {
            tree t = va_arg (args, tree);
            if (t && TYPE_P (t))
              t = TYPE_NAME (t);
            if (t && HAS_DECL_ASSEMBLER_NAME_P (t)
                && DECL_ASSEMBLER_NAME_SET_P (t))
              {
                fputc ('(', dumps->stream);
                fputs (IDENTIFIER_POINTER (DECL_ASSEMBLER_NAME (t)),
                       dumps->stream);
                fputc (')', dumps->stream);
              }
          }
          break;

        case 'U': /* Long unsigned.  */
          {
            unsigned long u = va_arg (args, unsigned long);
            fprintf (dumps->stream, "%lu", u);
          }
          break;

        case 'V': /* Version.  */
          {
            unsigned v = va_arg (args, unsigned);
            verstr_t string;

            version2string (v, string);
            fputs (string, dumps->stream);
          }
          break;

        case 'c': /* Character.  */
          {
            int c = va_arg (args, int);
            fputc (c, dumps->stream);
          }
          break;

        case 'd': /* Decimal int.  */
          {
            int d = va_arg (args, int);
            fprintf (dumps->stream, "%d", d);
          }
          break;

        case 'p': /* Pointer.  */
          {
            void *p = va_arg (args, void *);
            fprintf (dumps->stream, "%p", p);
          }
          break;

        case 's': /* String.  */
          {
            const char *s = va_arg (args, char *);
            gcc_checking_assert (s);
            fputs (s, dumps->stream);
          }
          break;

        case 'u': /* Unsigned.  */
          {
            unsigned u = va_arg (args, unsigned);
            fprintf (dumps->stream, "%u", u);
          }
          break;

        case 'x': /* Hex.  */
          {
            unsigned x = va_arg (args, unsigned);
            fprintf (dumps->stream, "%x", x);
          }
          break;
        }
    }
  fputs (format, dumps->stream);
  va_end (args);
  if (!no_nl)
    {
      dumps->bol = true;
      fputc ('\n', dumps->stream);
    }
  return true;
}

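/* For example, a caller might write (names and values here are
   illustrative only):

     dump () && dump ("Read %u %N", count, decl);

   The leading dump () check skips the formatting work entirely when
   dumping is disabled.  Each call emits one line, appending a newline
   unless that is explicitly suppressed.  */
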
struct note_def_cache_hasher : ggc_cache_ptr_hash<tree_node>
{
  static int keep_cache_entry (tree t)
  {
    if (!CHECKING_P)
      /* GTY is unfortunately not clever enough to conditionalize
         this.  */
      gcc_unreachable ();

    if (ggc_marked_p (t))
      return -1;

    unsigned n = dump.push (NULL);
    /* This might or might not be an error; either way, note the
       dropping.  */
    dump () && dump ("Dropping %N from note_defs table", t);
    dump.pop (n);

    return 0;
  }
};

/* We should stream each definition at most once.  This needs to be a
   cache because there are cases where a definition ends up not being
   retained, and we need to drop those so we don't get confused if
   that memory is reallocated.  */
typedef hash_table<note_def_cache_hasher> note_defs_table_t;
static GTY((cache)) note_defs_table_t *note_defs;

void
trees_in::assert_definition (tree decl ATTRIBUTE_UNUSED,
                             bool installing ATTRIBUTE_UNUSED)
{
#if CHECKING_P
  tree *slot = note_defs->find_slot (decl, installing ? INSERT : NO_INSERT);
  tree not_tmpl = STRIP_TEMPLATE (decl);
  if (installing)
    {
      /* We must be inserting for the first time.  */
      gcc_assert (!*slot);
      *slot = decl;
    }
  else
    /* If this is not the mergeable entity, it should not be in the
       table.  If it is a non-global-module mergeable entity, it
       should be in the table.  Global module entities could have been
       defined textually in the current TU and so might or might not
       be present.  */
    gcc_assert (!is_duplicate (decl)
                ? !slot
                : (slot
                   || !DECL_LANG_SPECIFIC (not_tmpl)
                   || !DECL_MODULE_PURVIEW_P (not_tmpl)
                   || (!DECL_MODULE_IMPORT_P (not_tmpl)
                       && header_module_p ())));

  if (not_tmpl != decl)
    gcc_assert (!note_defs->find_slot (not_tmpl, NO_INSERT));
#endif
}

void
trees_out::assert_definition (tree decl ATTRIBUTE_UNUSED)
{
#if CHECKING_P
  tree *slot = note_defs->find_slot (decl, INSERT);
  gcc_assert (!*slot);
  *slot = decl;
  if (TREE_CODE (decl) == TEMPLATE_DECL)
    gcc_assert (!note_defs->find_slot (DECL_TEMPLATE_RESULT (decl), NO_INSERT));
#endif
}

/********************************************************************/
static bool
noisy_p ()
{
  if (quiet_flag)
    return false;

  pp_needs_newline (global_dc->printer) = true;
  diagnostic_set_last_function (global_dc, (diagnostic_info *) NULL);

  return true;
}

/* Set the CMI repo.  Strip a trailing '/'; a repo of '.' becomes
   NULL.  */

static void
set_cmi_repo (const char *r)
{
  XDELETEVEC (cmi_repo);
  XDELETEVEC (cmi_path);
  cmi_path_alloc = 0;

  cmi_repo = NULL;
  cmi_repo_length = 0;

  if (!r || !r[0])
    return;

  size_t len = strlen (r);
  cmi_repo = XNEWVEC (char, len + 1);
  memcpy (cmi_repo, r, len + 1);

  if (len > 1 && IS_DIR_SEPARATOR (cmi_repo[len-1]))
    len--;
  if (len == 1 && cmi_repo[0] == '.')
    len--;
  cmi_repo[len] = 0;
  cmi_repo_length = len;
}

/* TO is a repo-relative name.  Provide one that we may use from where
   we are.  */

static const char *
maybe_add_cmi_prefix (const char *to, size_t *len_p = NULL)
{
  size_t len = len_p || cmi_repo_length ? strlen (to) : 0;

  if (cmi_repo_length && !IS_ABSOLUTE_PATH (to))
    {
      if (cmi_path_alloc < cmi_repo_length + len + 2)
        {
          XDELETEVEC (cmi_path);
          cmi_path_alloc = cmi_repo_length + len * 2 + 2;
          cmi_path = XNEWVEC (char, cmi_path_alloc);

          memcpy (cmi_path, cmi_repo, cmi_repo_length);
          cmi_path[cmi_repo_length] = DIR_SEPARATOR;
        }

      memcpy (&cmi_path[cmi_repo_length + 1], to, len + 1);
      len += cmi_repo_length + 1;
      to = cmi_path;
    }

  if (len_p)
    *len_p = len;

  return to;
}

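/* For example (hypothetical paths): with a repo of "gcm.cache", the
   repo-relative name "foo.gcm" becomes "gcm.cache/foo.gcm", while an
   absolute name is returned unchanged.  */
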
/* Try to create the directories of PATH.  */

static void
create_dirs (char *path)
{
  /* Try to create the missing directories.  */
  for (char *base = path; *base; base++)
    if (IS_DIR_SEPARATOR (*base))
      {
        char sep = *base;
        *base = 0;
        int failed = mkdir (path, S_IRWXU | S_IRWXG | S_IRWXO);
        dump () && dump ("Mkdir ('%s') errno:=%u", path, failed ? errno : 0);
        *base = sep;
        if (failed
            /* Maybe racing with another creator (of a *different*
               module).  */
            && errno != EEXIST)
          break;
      }
}

/* Given a CLASSTYPE_DECL_LIST VALUE get the template friend decl,
   if that's what this is.  */

static tree
friend_from_decl_list (tree frnd)
{
  tree res = frnd;

  if (TREE_CODE (frnd) != TEMPLATE_DECL)
    {
      tree tmpl = NULL_TREE;
      if (TYPE_P (frnd))
        {
          res = TYPE_NAME (frnd);
          if (CLASS_TYPE_P (frnd)
              && CLASSTYPE_TEMPLATE_INFO (frnd))
            tmpl = CLASSTYPE_TI_TEMPLATE (frnd);
        }
      else if (DECL_TEMPLATE_INFO (frnd))
        {
          tmpl = DECL_TI_TEMPLATE (frnd);
          if (TREE_CODE (tmpl) != TEMPLATE_DECL)
            tmpl = NULL_TREE;
        }

      if (tmpl && DECL_TEMPLATE_RESULT (tmpl) == res)
        res = tmpl;
    }

  return res;
}

/* Look up the enumerator NAME in enumeration type CTX.  */

static tree
find_enum_member (tree ctx, tree name)
{
  for (tree values = TYPE_VALUES (ctx);
       values; values = TREE_CHAIN (values))
    if (DECL_NAME (TREE_VALUE (values)) == name)
      return TREE_VALUE (values);

  return NULL_TREE;
}

/********************************************************************/
/* Instrumentation gathered writing bytes.  */

void
bytes_out::instrument ()
{
  dump ("Wrote %u bytes in %u blocks", lengths[3], spans[3]);
  dump ("Wrote %u bits in %u bytes", lengths[0] + lengths[1], lengths[2]);
  for (unsigned ix = 0; ix < 2; ix++)
    dump ("  %u %s spans of %R bits", spans[ix],
          ix ? "one" : "zero", lengths[ix], spans[ix]);
  dump ("  %u blocks with %R bits padding", spans[2],
        lengths[2] * 8 - (lengths[0] + lengths[1]), spans[2]);
}

/* Instrumentation gathered writing trees.  */

void
trees_out::instrument ()
{
  if (dump (""))
    {
      bytes_out::instrument ();
      dump ("Wrote:");
      dump ("  %u decl trees", decl_val_count);
      dump ("  %u other trees", tree_val_count);
      dump ("  %u back references", back_ref_count);
      dump ("  %u null trees", null_count);
    }
}

/* Setup and teardown for a tree walk.  */

void
trees_out::begin ()
{
  gcc_assert (!streaming_p () || !tree_map.elements ());

  mark_trees ();
  if (streaming_p ())
    parent::begin ();
}

unsigned
trees_out::end (elf_out *sink, unsigned name, unsigned *crc_ptr)
{
  gcc_checking_assert (streaming_p ());

  unmark_trees ();
  return parent::end (sink, name, crc_ptr);
}

void
trees_out::end ()
{
  gcc_assert (!streaming_p ());

  unmark_trees ();
  /* Do not parent::end -- we weren't streaming.  */
}

void
trees_out::mark_trees ()
{
  if (size_t size = tree_map.elements ())
    {
      /* This isn't our first rodeo, destroy and recreate the
         tree_map.  I'm a bad bad man.  Use the previous size as a
         guess for the next one (so not all bad).  */
      tree_map.~ptr_int_hash_map ();
      new (&tree_map) ptr_int_hash_map (size);
    }

  /* Install the fixed trees, with +ve references.  */
  unsigned limit = fixed_trees->length ();
  for (unsigned ix = 0; ix != limit; ix++)
    {
      tree val = (*fixed_trees)[ix];
      bool existed = tree_map.put (val, ix + tag_fixed);
      gcc_checking_assert (!TREE_VISITED (val) && !existed);
      TREE_VISITED (val) = true;
    }

  ref_num = 0;
}

/* Unmark the trees we encountered.  */

void
trees_out::unmark_trees ()
{
  ptr_int_hash_map::iterator end (tree_map.end ());
  for (ptr_int_hash_map::iterator iter (tree_map.begin ()); iter != end; ++iter)
    {
      tree node = reinterpret_cast<tree> ((*iter).first);
      int ref = (*iter).second;
      /* We should have visited the node, and converted its mergeable
         reference to a regular reference.  */
      gcc_checking_assert (TREE_VISITED (node)
                           && (ref <= tag_backref || ref >= tag_fixed));
      TREE_VISITED (node) = false;
    }
}

/* Mark DECL for by-value walking.  We do this by inserting it into
   the tree map with a reference of zero.  May be called multiple
   times on the same node.  */

void
trees_out::mark_by_value (tree decl)
{
  gcc_checking_assert (DECL_P (decl)
                       /* Enum consts are INTEGER_CSTs.  */
                       || TREE_CODE (decl) == INTEGER_CST
                       || TREE_CODE (decl) == TREE_BINFO);

  if (TREE_VISITED (decl))
    /* Must already be forced or fixed.  */
    gcc_checking_assert (*tree_map.get (decl) >= tag_value);
  else
    {
      bool existed = tree_map.put (decl, tag_value);
      gcc_checking_assert (!existed);
      TREE_VISITED (decl) = true;
    }
}

/* Return the tag of T, which must already have been visited.  */

int
trees_out::get_tag (tree t)
{
  gcc_checking_assert (TREE_VISITED (t));
  return *tree_map.get (t);
}

/* Insert T into the map; return its tag number.  */

int
trees_out::insert (tree t, walk_kind walk)
{
  gcc_checking_assert (walk != WK_normal || !TREE_VISITED (t));
  int tag = --ref_num;
  bool existed;
  int &slot = tree_map.get_or_insert (t, &existed);
  gcc_checking_assert (TREE_VISITED (t) == existed
                       && (!existed
                           || (walk == WK_value && slot == tag_value)));
  TREE_VISITED (t) = true;
  slot = tag;

  return tag;
}

/* Insert T into the backreference array.  Return its back reference
   number.  */

int
trees_in::insert (tree t)
{
  gcc_checking_assert (t || get_overrun ());
  back_refs.safe_push (t);
  return -(int)back_refs.length ();
}

/* A chained set of decls.  */

void
trees_out::chained_decls (tree decls)
{
  for (; decls; decls = DECL_CHAIN (decls))
    tree_node (decls);
  tree_node (NULL_TREE);
}

tree
trees_in::chained_decls ()
{
  tree decls = NULL_TREE;
  for (tree *chain = &decls;;)
    if (tree decl = tree_node ())
      {
        if (!DECL_P (decl) || DECL_CHAIN (decl))
          {
            set_overrun ();
            break;
          }
        *chain = decl;
        chain = &DECL_CHAIN (decl);
      }
    else
      break;

  return decls;
}

/* A vector of decls following DECL_CHAIN.  */

void
trees_out::vec_chained_decls (tree decls)
{
  if (streaming_p ())
    {
      unsigned len = 0;

      for (tree decl = decls; decl; decl = DECL_CHAIN (decl))
        len++;
      u (len);
    }

  for (tree decl = decls; decl; decl = DECL_CHAIN (decl))
    {
      if (DECL_IMPLICIT_TYPEDEF_P (decl)
          && TYPE_NAME (TREE_TYPE (decl)) != decl)
        /* An anonymous struct with a typedef name.  An odd thing to
           write.  */
        tree_node (NULL_TREE);
      else
        tree_node (decl);
    }
}

vec<tree, va_heap> *
trees_in::vec_chained_decls ()
{
  vec<tree, va_heap> *v = NULL;

  if (unsigned len = u ())
    {
      vec_alloc (v, len);

      for (unsigned ix = 0; ix < len; ix++)
        {
          tree decl = tree_node ();
          if (decl && !DECL_P (decl))
            {
              set_overrun ();
              break;
            }
          v->quick_push (decl);
        }

      if (get_overrun ())
        {
          vec_free (v);
          v = NULL;
        }
    }

  return v;
}

/* A vector of trees.  */

void
trees_out::tree_vec (vec<tree, va_gc> *v)
{
  unsigned len = vec_safe_length (v);
  if (streaming_p ())
    u (len);
  for (unsigned ix = 0; ix != len; ix++)
    tree_node ((*v)[ix]);
}

vec<tree, va_gc> *
trees_in::tree_vec ()
{
  vec<tree, va_gc> *v = NULL;
  if (unsigned len = u ())
    {
      vec_alloc (v, len);
      for (unsigned ix = 0; ix != len; ix++)
        v->quick_push (tree_node ());
    }
  return v;
}

/* A vector of tree pairs.  */

void
trees_out::tree_pair_vec (vec<tree_pair_s, va_gc> *v)
{
  unsigned len = vec_safe_length (v);
  if (streaming_p ())
    u (len);
  if (len)
    for (unsigned ix = 0; ix != len; ix++)
      {
        tree_pair_s const &s = (*v)[ix];
        tree_node (s.purpose);
        tree_node (s.value);
      }
}

vec<tree_pair_s, va_gc> *
trees_in::tree_pair_vec ()
{
  vec<tree_pair_s, va_gc> *v = NULL;
  if (unsigned len = u ())
    {
      vec_alloc (v, len);
      for (unsigned ix = 0; ix != len; ix++)
        {
          tree_pair_s s;
          s.purpose = tree_node ();
          s.value = tree_node ();
          v->quick_push (s);
        }
    }
  return v;
}

void
trees_out::tree_list (tree list, bool has_purpose)
{
  for (; list; list = TREE_CHAIN (list))
    {
      gcc_checking_assert (TREE_VALUE (list));
      tree_node (TREE_VALUE (list));
      if (has_purpose)
        tree_node (TREE_PURPOSE (list));
    }
  tree_node (NULL_TREE);
}

tree
trees_in::tree_list (bool has_purpose)
{
  tree res = NULL_TREE;

  for (tree *chain = &res; tree value = tree_node ();
       chain = &TREE_CHAIN (*chain))
    {
      tree purpose = has_purpose ? tree_node () : NULL_TREE;
      *chain = build_tree_list (purpose, value);
    }

  return res;
}

/* Start tree write.  Write information to allocate the receiving
   node.  */

void
trees_out::start (tree t, bool code_streamed)
{
  if (TYPE_P (t))
    {
      enum tree_code code = TREE_CODE (t);
      gcc_checking_assert (TYPE_MAIN_VARIANT (t) == t);
      /* All these types are TYPE_NON_COMMON.  */
      gcc_checking_assert (code == RECORD_TYPE
                           || code == UNION_TYPE
                           || code == ENUMERAL_TYPE
                           || code == TEMPLATE_TYPE_PARM
                           || code == TEMPLATE_TEMPLATE_PARM
                           || code == BOUND_TEMPLATE_TEMPLATE_PARM);
    }

  if (!code_streamed)
    u (TREE_CODE (t));

  switch (TREE_CODE (t))
    {
    default:
      if (VL_EXP_CLASS_P (t))
        u (VL_EXP_OPERAND_LENGTH (t));
      break;

    case INTEGER_CST:
      u (TREE_INT_CST_NUNITS (t));
      u (TREE_INT_CST_EXT_NUNITS (t));
      break;

    case OMP_CLAUSE:
      state->extensions |= SE_OPENMP;
      u (OMP_CLAUSE_CODE (t));
      break;

    case STRING_CST:
      str (TREE_STRING_POINTER (t), TREE_STRING_LENGTH (t));
      break;

    case VECTOR_CST:
      u (VECTOR_CST_LOG2_NPATTERNS (t));
      u (VECTOR_CST_NELTS_PER_PATTERN (t));
      break;

    case TREE_BINFO:
      u (BINFO_N_BASE_BINFOS (t));
      break;

    case TREE_VEC:
      u (TREE_VEC_LENGTH (t));
      break;

    case FIXED_CST:
      gcc_unreachable (); /* Not supported in C++.  */
      break;

    case IDENTIFIER_NODE:
    case SSA_NAME:
    case TARGET_MEM_REF:
    case TRANSLATION_UNIT_DECL:
      /* We shouldn't meet these.  */
      gcc_unreachable ();
      break;
    }
}

/* Start tree read.  Allocate the receiving node.  */

tree
trees_in::start (unsigned code)
{
  tree t = NULL_TREE;

  if (!code)
    code = u ();

  switch (code)
    {
    default:
      if (code >= MAX_TREE_CODES)
        {
        fail:
          set_overrun ();
          return NULL_TREE;
        }
      else if (TREE_CODE_CLASS (code) == tcc_vl_exp)
        {
          unsigned ops = u ();
          t = build_vl_exp (tree_code (code), ops);
        }
      else
        t = make_node (tree_code (code));
      break;

    case INTEGER_CST:
      {
        unsigned n = u ();
        unsigned e = u ();
        t = make_int_cst (n, e);
      }
      break;

    case OMP_CLAUSE:
      {
        if (!(state->extensions & SE_OPENMP))
          goto fail;

        unsigned omp_code = u ();
        t = build_omp_clause (UNKNOWN_LOCATION, omp_clause_code (omp_code));
      }
      break;

    case STRING_CST:
      {
        size_t l;
        const char *chars = str (&l);
        t = build_string (l, chars);
      }
      break;

    case VECTOR_CST:
      {
        unsigned log2_npats = u ();
        unsigned elts_per = u ();
        t = make_vector (log2_npats, elts_per);
      }
      break;

    case TREE_BINFO:
      t = make_tree_binfo (u ());
      break;

    case TREE_VEC:
      t = make_tree_vec (u ());
      break;

    case FIXED_CST:
    case IDENTIFIER_NODE:
    case SSA_NAME:
    case TARGET_MEM_REF:
    case TRANSLATION_UNIT_DECL:
      goto fail;
    }

  return t;
}

/* The structure streamers access the raw fields, because the
   alternative, using the accessor macros, can require different
   accessors for the same underlying field depending on the tree
   code.  That's both confusing and annoying.  */

/* Read & write the core boolean flags.  */

void
trees_out::core_bools (tree t, bits_out& bits)
{
#define WB(X) (bits.b (X))
/* Stream X if COND holds, and if !COND stream a dummy value so that the
   overall number of bits streamed is independent of the runtime value
   of COND, which allows the compiler to better optimize this function.  */
#define WB_IF(COND, X) WB ((COND) ? (X) : false)
  tree_code code = TREE_CODE (t);

  WB (t->base.side_effects_flag);
  WB (t->base.constant_flag);
  WB (t->base.addressable_flag);
  WB (t->base.volatile_flag);
  WB (t->base.readonly_flag);
  /* base.asm_written_flag is a property of the current TU's use of
     this decl.  */
  WB (t->base.nowarning_flag);
  /* base.visited reads as zero (it's set for the writer, because
     that's how we mark nodes).  */
  /* base.used_flag is not streamed.  Readers may set TREE_USED of
     decls they use.  */
  WB (t->base.nothrow_flag);
  WB (t->base.static_flag);
  /* This is TYPE_CACHED_VALUES_P for types.  */
  WB_IF (TREE_CODE_CLASS (code) != tcc_type, t->base.public_flag);
  WB (t->base.private_flag);
  WB (t->base.protected_flag);
  WB (t->base.deprecated_flag);
  WB (t->base.default_def_flag);

  switch (code)
    {
    case CALL_EXPR:
    case INTEGER_CST:
    case SSA_NAME:
    case TARGET_MEM_REF:
    case TREE_VEC:
      /* These use different base.u fields.  */
      return;

    default:
      WB (t->base.u.bits.lang_flag_0);
      bool flag_1 = t->base.u.bits.lang_flag_1;
      if (!flag_1)
        ;
      else if (code == TEMPLATE_INFO)
        /* This is TI_PENDING_TEMPLATE_FLAG, not relevant to the
           reader.  */
        flag_1 = false;
      else if (code == VAR_DECL)
        {
          /* This is DECL_INITIALIZED_P.  */
          if (TREE_CODE (DECL_CONTEXT (t)) != FUNCTION_DECL)
            /* We'll set this when reading the definition.  */
            flag_1 = false;
        }
      WB (flag_1);
      WB (t->base.u.bits.lang_flag_2);
      WB (t->base.u.bits.lang_flag_3);
      WB (t->base.u.bits.lang_flag_4);
      WB (t->base.u.bits.lang_flag_5);
      WB (t->base.u.bits.lang_flag_6);
      WB (t->base.u.bits.saturating_flag);
      WB (t->base.u.bits.unsigned_flag);
      WB (t->base.u.bits.packed_flag);
      WB (t->base.u.bits.user_align);
      WB (t->base.u.bits.nameless_flag);
      WB (t->base.u.bits.atomic_flag);
      WB (t->base.u.bits.unavailable_flag);
      break;
    }

  if (TREE_CODE_CLASS (code) == tcc_type)
    {
      WB (t->type_common.no_force_blk_flag);
      WB (t->type_common.needs_constructing_flag);
      WB (t->type_common.transparent_aggr_flag);
      WB (t->type_common.restrict_flag);
      WB (t->type_common.string_flag);
      WB (t->type_common.lang_flag_0);
      WB (t->type_common.lang_flag_1);
      WB (t->type_common.lang_flag_2);
      WB (t->type_common.lang_flag_3);
      WB (t->type_common.lang_flag_4);
      WB (t->type_common.lang_flag_5);
      WB (t->type_common.lang_flag_6);
      WB (t->type_common.typeless_storage);
    }

  if (TREE_CODE_CLASS (code) != tcc_declaration)
    return;

  if (CODE_CONTAINS_STRUCT (code, TS_DECL_COMMON))
    {
      WB (t->decl_common.nonlocal_flag);
      WB (t->decl_common.virtual_flag);
      WB (t->decl_common.ignored_flag);
      WB (t->decl_common.abstract_flag);
      WB (t->decl_common.artificial_flag);
      WB (t->decl_common.preserve_flag);
      WB (t->decl_common.debug_expr_is_from);
      WB (t->decl_common.lang_flag_0);
      WB (t->decl_common.lang_flag_1);
      WB (t->decl_common.lang_flag_2);
      WB (t->decl_common.lang_flag_3);
      WB (t->decl_common.lang_flag_4);

      {
        /* This is DECL_INTERFACE_KNOWN: we should redetermine whether
           we need to import or export any vtables or typeinfo objects
           on stream-in.  */
        bool interface_known = t->decl_common.lang_flag_5;
        if (VAR_P (t) && (DECL_VTABLE_OR_VTT_P (t) || DECL_TINFO_P (t)))
          interface_known = false;
        WB (interface_known);
      }

      WB (t->decl_common.lang_flag_6);
      WB (t->decl_common.lang_flag_7);
      WB (t->decl_common.lang_flag_8);
      WB (t->decl_common.decl_flag_0);

      {
        /* DECL_EXTERNAL -> decl_flag_1
             == it is defined elsewhere
           DECL_NOT_REALLY_EXTERN -> base.not_really_extern
             == that was a lie, it is here  */

        bool is_external = t->decl_common.decl_flag_1;
        if (!is_external)
          /* decl_flag_1 is DECL_EXTERNAL.  Things we emit here might
             well be external from the POV of an importer.  */
          // FIXME: Do we need to know if this is a TEMPLATE_RESULT --
          // a flag from the caller?
          switch (code)
            {
            default:
              break;

            case VAR_DECL:
              if (TREE_PUBLIC (t)
                  && !(TREE_STATIC (t)
                       && DECL_FUNCTION_SCOPE_P (t)
                       && DECL_DECLARED_INLINE_P (DECL_CONTEXT (t)))
                  && !DECL_VAR_DECLARED_INLINE_P (t))
                is_external = true;
              break;

            case FUNCTION_DECL:
              if (TREE_PUBLIC (t)
                  && !DECL_DECLARED_INLINE_P (t))
                is_external = true;
              break;
            }
        WB (is_external);
      }

      WB (t->decl_common.decl_flag_2);
      WB (t->decl_common.decl_flag_3);
      WB (t->decl_common.not_gimple_reg_flag);
      WB (t->decl_common.decl_by_reference_flag);
      WB (t->decl_common.decl_read_flag);
      WB (t->decl_common.decl_nonshareable_flag);
      WB (t->decl_common.decl_not_flexarray);
    }
  else
    return;

  if (CODE_CONTAINS_STRUCT (code, TS_DECL_WITH_VIS))
    {
      WB (t->decl_with_vis.defer_output);
      WB (t->decl_with_vis.hard_register);
      WB (t->decl_with_vis.common_flag);
      WB (t->decl_with_vis.in_text_section);
      WB (t->decl_with_vis.in_constant_pool);
      WB (t->decl_with_vis.dllimport_flag);
      WB (t->decl_with_vis.weak_flag);
      WB (t->decl_with_vis.seen_in_bind_expr);
      WB (t->decl_with_vis.comdat_flag);
      WB (t->decl_with_vis.visibility_specified);
      WB (t->decl_with_vis.init_priority_p);
      WB (t->decl_with_vis.shadowed_for_var_p);
      WB (t->decl_with_vis.cxx_constructor);
      WB (t->decl_with_vis.cxx_destructor);
      WB (t->decl_with_vis.final);
      WB (t->decl_with_vis.regdecl_flag);
    }
  else
    return;

  if (CODE_CONTAINS_STRUCT (code, TS_FUNCTION_DECL))
    {
      WB (t->function_decl.static_ctor_flag);
      WB (t->function_decl.static_dtor_flag);
      WB (t->function_decl.uninlinable);
      WB (t->function_decl.possibly_inlined);
      WB (t->function_decl.novops_flag);
      WB (t->function_decl.returns_twice_flag);
      WB (t->function_decl.malloc_flag);
      WB (t->function_decl.declared_inline_flag);
      WB (t->function_decl.no_inline_warning_flag);
      WB (t->function_decl.no_instrument_function_entry_exit);
      WB (t->function_decl.no_limit_stack);
      WB (t->function_decl.disregard_inline_limits);
      WB (t->function_decl.pure_flag);
      WB (t->function_decl.looping_const_or_pure_flag);

      WB (t->function_decl.has_debug_args_flag);
      WB (t->function_decl.versioned_function);

      /* decl_type is a (misnamed) 2-bit discriminator.  */
      unsigned kind = t->function_decl.decl_type;
      WB ((kind >> 0) & 1);
      WB ((kind >> 1) & 1);
    }
#undef WB_IF
#undef WB
}

bool
trees_in::core_bools (tree t, bits_in& bits)
{
#define RB(X) ((X) = bits.b ())
/* See the comment for WB_IF in trees_out::core_bools.  */
#define RB_IF(COND, X) ((COND) ? RB (X) : bits.b ())

  tree_code code = TREE_CODE (t);

  RB (t->base.side_effects_flag);
  RB (t->base.constant_flag);
  RB (t->base.addressable_flag);
  RB (t->base.volatile_flag);
  RB (t->base.readonly_flag);
  /* base.asm_written_flag is not streamed.  */
  RB (t->base.nowarning_flag);
  /* base.visited is not streamed.  */
  /* base.used_flag is not streamed.  */
  RB (t->base.nothrow_flag);
  RB (t->base.static_flag);
  RB_IF (TREE_CODE_CLASS (code) != tcc_type, t->base.public_flag);
  RB (t->base.private_flag);
  RB (t->base.protected_flag);
  RB (t->base.deprecated_flag);
  RB (t->base.default_def_flag);

  switch (code)
    {
    case CALL_EXPR:
    case INTEGER_CST:
    case SSA_NAME:
    case TARGET_MEM_REF:
    case TREE_VEC:
      /* These use different base.u fields.  */
      goto done;

    default:
      RB (t->base.u.bits.lang_flag_0);
      RB (t->base.u.bits.lang_flag_1);
      RB (t->base.u.bits.lang_flag_2);
      RB (t->base.u.bits.lang_flag_3);
      RB (t->base.u.bits.lang_flag_4);
      RB (t->base.u.bits.lang_flag_5);
      RB (t->base.u.bits.lang_flag_6);
      RB (t->base.u.bits.saturating_flag);
      RB (t->base.u.bits.unsigned_flag);
      RB (t->base.u.bits.packed_flag);
      RB (t->base.u.bits.user_align);
      RB (t->base.u.bits.nameless_flag);
      RB (t->base.u.bits.atomic_flag);
      RB (t->base.u.bits.unavailable_flag);
      break;
    }

  if (TREE_CODE_CLASS (code) == tcc_type)
    {
      RB (t->type_common.no_force_blk_flag);
      RB (t->type_common.needs_constructing_flag);
      RB (t->type_common.transparent_aggr_flag);
      RB (t->type_common.restrict_flag);
      RB (t->type_common.string_flag);
      RB (t->type_common.lang_flag_0);
      RB (t->type_common.lang_flag_1);
      RB (t->type_common.lang_flag_2);
      RB (t->type_common.lang_flag_3);
      RB (t->type_common.lang_flag_4);
      RB (t->type_common.lang_flag_5);
      RB (t->type_common.lang_flag_6);
      RB (t->type_common.typeless_storage);
    }

  if (TREE_CODE_CLASS (code) != tcc_declaration)
    goto done;

  if (CODE_CONTAINS_STRUCT (code, TS_DECL_COMMON))
    {
      RB (t->decl_common.nonlocal_flag);
      RB (t->decl_common.virtual_flag);
      RB (t->decl_common.ignored_flag);
      RB (t->decl_common.abstract_flag);
      RB (t->decl_common.artificial_flag);
      RB (t->decl_common.preserve_flag);
      RB (t->decl_common.debug_expr_is_from);
      RB (t->decl_common.lang_flag_0);
      RB (t->decl_common.lang_flag_1);
      RB (t->decl_common.lang_flag_2);
      RB (t->decl_common.lang_flag_3);
      RB (t->decl_common.lang_flag_4);
      RB (t->decl_common.lang_flag_5);
      RB (t->decl_common.lang_flag_6);
      RB (t->decl_common.lang_flag_7);
      RB (t->decl_common.lang_flag_8);
      RB (t->decl_common.decl_flag_0);
      RB (t->decl_common.decl_flag_1);
      RB (t->decl_common.decl_flag_2);
      RB (t->decl_common.decl_flag_3);
      RB (t->decl_common.not_gimple_reg_flag);
      RB (t->decl_common.decl_by_reference_flag);
      RB (t->decl_common.decl_read_flag);
      RB (t->decl_common.decl_nonshareable_flag);
      RB (t->decl_common.decl_not_flexarray);
    }
  else
    goto done;

  if (CODE_CONTAINS_STRUCT (code, TS_DECL_WITH_VIS))
    {
      RB (t->decl_with_vis.defer_output);
      RB (t->decl_with_vis.hard_register);
      RB (t->decl_with_vis.common_flag);
      RB (t->decl_with_vis.in_text_section);
      RB (t->decl_with_vis.in_constant_pool);
      RB (t->decl_with_vis.dllimport_flag);
      RB (t->decl_with_vis.weak_flag);
      RB (t->decl_with_vis.seen_in_bind_expr);
      RB (t->decl_with_vis.comdat_flag);
      RB (t->decl_with_vis.visibility_specified);
      RB (t->decl_with_vis.init_priority_p);
      RB (t->decl_with_vis.shadowed_for_var_p);
      RB (t->decl_with_vis.cxx_constructor);
      RB (t->decl_with_vis.cxx_destructor);
      RB (t->decl_with_vis.final);
      RB (t->decl_with_vis.regdecl_flag);
    }
  else
    goto done;

  if (CODE_CONTAINS_STRUCT (code, TS_FUNCTION_DECL))
    {
      RB (t->function_decl.static_ctor_flag);
      RB (t->function_decl.static_dtor_flag);
      RB (t->function_decl.uninlinable);
      RB (t->function_decl.possibly_inlined);
      RB (t->function_decl.novops_flag);
      RB (t->function_decl.returns_twice_flag);
      RB (t->function_decl.malloc_flag);
      RB (t->function_decl.declared_inline_flag);
      RB (t->function_decl.no_inline_warning_flag);
      RB (t->function_decl.no_instrument_function_entry_exit);
      RB (t->function_decl.no_limit_stack);
      RB (t->function_decl.disregard_inline_limits);
      RB (t->function_decl.pure_flag);
      RB (t->function_decl.looping_const_or_pure_flag);

      RB (t->function_decl.has_debug_args_flag);
      RB (t->function_decl.versioned_function);

      /* decl_type is a (misnamed) 2-bit discriminator.  */
      unsigned kind = 0;
      kind |= unsigned (bits.b ()) << 0;
      kind |= unsigned (bits.b ()) << 1;
      t->function_decl.decl_type = function_decl_type (kind);
    }
#undef RB_IF
#undef RB
 done:
  return !get_overrun ();
}

void
trees_out::lang_decl_bools (tree t, bits_out& bits)
{
#define WB(X) (bits.b (X))
  const struct lang_decl *lang = DECL_LANG_SPECIFIC (t);

  bits.bflush ();
  WB (lang->u.base.language == lang_cplusplus);
  WB ((lang->u.base.use_template >> 0) & 1);
  WB ((lang->u.base.use_template >> 1) & 1);
  /* Do not write lang->u.base.not_really_extern, importer will set
     when reading the definition (if any). */
  WB (lang->u.base.initialized_in_class);
  WB (lang->u.base.threadprivate_or_deleted_p);
  /* Do not write lang->u.base.anticipated_p, it is a property of the
     current TU. */
  WB (lang->u.base.friend_or_tls);
  WB (lang->u.base.unknown_bound_p);
  /* Do not write lang->u.base.odr_used, importer will recalculate if
     they do ODR use this decl. */
  WB (lang->u.base.concept_p);
  WB (lang->u.base.var_declared_inline_p);
  WB (lang->u.base.dependent_init_p);
  /* When building a header unit, everything is marked as purview (so
     we know which decls to write).  But when we import them we do not
     want to mark them as in module purview. */
  WB (lang->u.base.module_purview_p && !header_module_p ());
  WB (lang->u.base.module_attach_p);
  WB (lang->u.base.module_keyed_decls_p);
  switch (lang->u.base.selector)
    {
    default:
      gcc_unreachable ();

    case lds_fn:  /* lang_decl_fn. */
      WB (lang->u.fn.global_ctor_p);
      WB (lang->u.fn.global_dtor_p);
      WB (lang->u.fn.static_function);
      WB (lang->u.fn.pure_virtual);
      WB (lang->u.fn.defaulted_p);
      WB (lang->u.fn.has_in_charge_parm_p);
      WB (lang->u.fn.has_vtt_parm_p);
      /* There shouldn't be a pending inline at this point. */
      gcc_assert (!lang->u.fn.pending_inline_p);
      WB (lang->u.fn.nonconverting);
      WB (lang->u.fn.thunk_p);
      WB (lang->u.fn.this_thunk_p);
      /* Do not stream lang->u.fn.hidden_friend_p, it is a property of
         the TU. */
      WB (lang->u.fn.omp_declare_reduction_p);
      WB (lang->u.fn.has_dependent_explicit_spec_p);
      WB (lang->u.fn.immediate_fn_p);
      WB (lang->u.fn.maybe_deleted);
      /* We do not stream lang->u.fn.implicit_constexpr. */
      WB (lang->u.fn.escalated_p);
      WB (lang->u.fn.xobj_func);
      goto lds_min;

    case lds_decomp:  /* lang_decl_decomp. */
      /* No bools. */
      goto lds_min;

    case lds_min:  /* lang_decl_min. */
    lds_min:
      /* No bools. */
      break;

    case lds_ns:  /* lang_decl_ns. */
      /* No bools. */
      break;

    case lds_parm:  /* lang_decl_parm. */
      /* No bools. */
      break;
    }
#undef WB
}

bool
trees_in::lang_decl_bools (tree t, bits_in& bits)
{
#define RB(X) ((X) = bits.b ())
  struct lang_decl *lang = DECL_LANG_SPECIFIC (t);

  bits.bflush ();
  lang->u.base.language = bits.b () ? lang_cplusplus : lang_c;
  unsigned v;
  v = bits.b () << 0;
  v |= bits.b () << 1;
  lang->u.base.use_template = v;
  /* lang->u.base.not_really_extern is not streamed. */
  RB (lang->u.base.initialized_in_class);
  RB (lang->u.base.threadprivate_or_deleted_p);
  /* lang->u.base.anticipated_p is not streamed. */
  RB (lang->u.base.friend_or_tls);
  RB (lang->u.base.unknown_bound_p);
  /* lang->u.base.odr_used is not streamed. */
  RB (lang->u.base.concept_p);
  RB (lang->u.base.var_declared_inline_p);
  RB (lang->u.base.dependent_init_p);
  RB (lang->u.base.module_purview_p);
  RB (lang->u.base.module_attach_p);
  RB (lang->u.base.module_keyed_decls_p);
  switch (lang->u.base.selector)
    {
    default:
      gcc_unreachable ();

    case lds_fn:  /* lang_decl_fn. */
      RB (lang->u.fn.global_ctor_p);
      RB (lang->u.fn.global_dtor_p);
      RB (lang->u.fn.static_function);
      RB (lang->u.fn.pure_virtual);
      RB (lang->u.fn.defaulted_p);
      RB (lang->u.fn.has_in_charge_parm_p);
      RB (lang->u.fn.has_vtt_parm_p);
      RB (lang->u.fn.nonconverting);
      RB (lang->u.fn.thunk_p);
      RB (lang->u.fn.this_thunk_p);
      /* lang->u.fn.hidden_friend_p is not streamed. */
      RB (lang->u.fn.omp_declare_reduction_p);
      RB (lang->u.fn.has_dependent_explicit_spec_p);
      RB (lang->u.fn.immediate_fn_p);
      RB (lang->u.fn.maybe_deleted);
      /* We do not stream lang->u.fn.implicit_constexpr. */
      RB (lang->u.fn.escalated_p);
      RB (lang->u.fn.xobj_func);
      goto lds_min;

    case lds_decomp:  /* lang_decl_decomp. */
      /* No bools. */
      goto lds_min;

    case lds_min:  /* lang_decl_min. */
    lds_min:
      /* No bools. */
      break;

    case lds_ns:  /* lang_decl_ns. */
      /* No bools. */
      break;

    case lds_parm:  /* lang_decl_parm. */
      /* No bools. */
      break;
    }
#undef RB
  return !get_overrun ();
}

void
trees_out::lang_type_bools (tree t, bits_out& bits)
{
#define WB(X) (bits.b (X))
  const struct lang_type *lang = TYPE_LANG_SPECIFIC (t);

  bits.bflush ();
  WB (lang->has_type_conversion);
  WB (lang->has_copy_ctor);
  WB (lang->has_default_ctor);
  WB (lang->const_needs_init);
  WB (lang->ref_needs_init);
  WB (lang->has_const_copy_assign);
  WB ((lang->use_template >> 0) & 1);
  WB ((lang->use_template >> 1) & 1);

  WB (lang->has_mutable);
  WB (lang->com_interface);
  WB (lang->non_pod_class);
  WB (lang->nearly_empty_p);
  WB (lang->user_align);
  WB (lang->has_copy_assign);
  WB (lang->has_new);
  WB (lang->has_array_new);

  WB ((lang->gets_delete >> 0) & 1);
  WB ((lang->gets_delete >> 1) & 1);
  WB (lang->interface_only);
  WB (lang->interface_unknown);
  WB (lang->contains_empty_class_p);
  WB (lang->anon_aggr);
  WB (lang->non_zero_init);
  WB (lang->empty_p);

  WB (lang->vec_new_uses_cookie);
  WB (lang->declared_class);
  WB (lang->diamond_shaped);
  WB (lang->repeated_base);
  gcc_assert (!lang->being_defined);
  // lang->debug_requested
  WB (lang->fields_readonly);
  WB (lang->ptrmemfunc_flag);

  WB (lang->lazy_default_ctor);
  WB (lang->lazy_copy_ctor);
  WB (lang->lazy_copy_assign);
  WB (lang->lazy_destructor);
  WB (lang->has_const_copy_ctor);
  WB (lang->has_complex_copy_ctor);
  WB (lang->has_complex_copy_assign);
  WB (lang->non_aggregate);

  WB (lang->has_complex_dflt);
  WB (lang->has_list_ctor);
  WB (lang->non_std_layout);
  WB (lang->is_literal);
  WB (lang->lazy_move_ctor);
  WB (lang->lazy_move_assign);
  WB (lang->has_complex_move_ctor);
  WB (lang->has_complex_move_assign);

  WB (lang->has_constexpr_ctor);
  WB (lang->unique_obj_representations);
  WB (lang->unique_obj_representations_set);
#undef WB
}

bool
trees_in::lang_type_bools (tree t, bits_in& bits)
{
#define RB(X) ((X) = bits.b ())
  struct lang_type *lang = TYPE_LANG_SPECIFIC (t);

  bits.bflush ();
  RB (lang->has_type_conversion);
  RB (lang->has_copy_ctor);
  RB (lang->has_default_ctor);
  RB (lang->const_needs_init);
  RB (lang->ref_needs_init);
  RB (lang->has_const_copy_assign);
  unsigned v;
  v = bits.b () << 0;
  v |= bits.b () << 1;
  lang->use_template = v;

  RB (lang->has_mutable);
  RB (lang->com_interface);
  RB (lang->non_pod_class);
  RB (lang->nearly_empty_p);
  RB (lang->user_align);
  RB (lang->has_copy_assign);
  RB (lang->has_new);
  RB (lang->has_array_new);

  v = bits.b () << 0;
  v |= bits.b () << 1;
  lang->gets_delete = v;
  RB (lang->interface_only);
  RB (lang->interface_unknown);
  RB (lang->contains_empty_class_p);
  RB (lang->anon_aggr);
  RB (lang->non_zero_init);
  RB (lang->empty_p);

  RB (lang->vec_new_uses_cookie);
  RB (lang->declared_class);
  RB (lang->diamond_shaped);
  RB (lang->repeated_base);
  gcc_assert (!lang->being_defined);
  gcc_assert (!lang->debug_requested);
  RB (lang->fields_readonly);
  RB (lang->ptrmemfunc_flag);

  RB (lang->lazy_default_ctor);
  RB (lang->lazy_copy_ctor);
  RB (lang->lazy_copy_assign);
  RB (lang->lazy_destructor);
  RB (lang->has_const_copy_ctor);
  RB (lang->has_complex_copy_ctor);
  RB (lang->has_complex_copy_assign);
  RB (lang->non_aggregate);

  RB (lang->has_complex_dflt);
  RB (lang->has_list_ctor);
  RB (lang->non_std_layout);
  RB (lang->is_literal);
  RB (lang->lazy_move_ctor);
  RB (lang->lazy_move_assign);
  RB (lang->has_complex_move_ctor);
  RB (lang->has_complex_move_assign);

  RB (lang->has_constexpr_ctor);
  RB (lang->unique_obj_representations);
  RB (lang->unique_obj_representations_set);
#undef RB
  return !get_overrun ();
}

/* Read & write the core values and pointers. */

void
trees_out::core_vals (tree t)
{
#define WU(X) (u (X))
#define WT(X) (tree_node (X))
  tree_code code = TREE_CODE (t);

  /* First by shape of the tree. */

  if (CODE_CONTAINS_STRUCT (code, TS_DECL_MINIMAL))
    {
      /* Write this early, for better log information. */
      WT (t->decl_minimal.name);
      if (!DECL_TEMPLATE_PARM_P (t))
        WT (t->decl_minimal.context);

      if (state)
        state->write_location (*this, t->decl_minimal.locus);
    }

  if (CODE_CONTAINS_STRUCT (code, TS_TYPE_COMMON))
    {
      /* The only types we write also have TYPE_NON_COMMON. */
      gcc_checking_assert (CODE_CONTAINS_STRUCT (code, TS_TYPE_NON_COMMON));

      /* We only stream the main variant. */
      gcc_checking_assert (TYPE_MAIN_VARIANT (t) == t);

      /* Stream the name & context first, for better log information. */
      WT (t->type_common.name);
      WT (t->type_common.context);

      /* By construction we want to make sure we have the canonical
         and main variants already in the type table, so emit them
         now. */
      WT (t->type_common.main_variant);

      tree canonical = t->type_common.canonical;
      if (canonical && DECL_TEMPLATE_PARM_P (TYPE_NAME (t)))
        /* We do not want to wander into different templates.
           Reconstructed on stream in. */
        canonical = t;
      WT (canonical);

      /* type_common.next_variant is internally manipulated. */
      /* type_common.pointer_to, type_common.reference_to. */

      if (streaming_p ())
        {
          WU (t->type_common.precision);
          WU (t->type_common.contains_placeholder_bits);
          WU (t->type_common.mode);
          WU (t->type_common.align);
        }

      if (!RECORD_OR_UNION_CODE_P (code))
        {
          WT (t->type_common.size);
          WT (t->type_common.size_unit);
        }
      WT (t->type_common.attributes);

      WT (t->type_common.common.chain);  /* TYPE_STUB_DECL. */
    }

  if (CODE_CONTAINS_STRUCT (code, TS_DECL_COMMON))
    {
      if (streaming_p ())
        {
          WU (t->decl_common.mode);
          WU (t->decl_common.off_align);
          WU (t->decl_common.align);
        }

      /* For templates these hold instantiation (partial and/or
         specialization) information. */
      if (code != TEMPLATE_DECL)
        {
          WT (t->decl_common.size);
          WT (t->decl_common.size_unit);
        }

      WT (t->decl_common.attributes);
      // FIXME: Does this introduce cross-decl links? For instance
      // from instantiation to the template. If so, we'll need more
      // deduplication logic. I think we'll need to walk the blocks
      // of the owning function_decl's abstract origin in tandem, to
      // generate the locating data needed?
      WT (t->decl_common.abstract_origin);
    }

  if (CODE_CONTAINS_STRUCT (code, TS_DECL_WITH_VIS))
    {
      WT (t->decl_with_vis.assembler_name);
      if (streaming_p ())
        WU (t->decl_with_vis.visibility);
    }

  if (CODE_CONTAINS_STRUCT (code, TS_TYPE_NON_COMMON))
    {
      if (code == ENUMERAL_TYPE)
        {
          /* These fields get set even for opaque enums that lack a
             definition, so we stream them directly for each ENUMERAL_TYPE.
             We stream TYPE_VALUES as part of the definition. */
          WT (t->type_non_common.maxval);
          WT (t->type_non_common.minval);
        }
      /* Records and unions hold FIELDS, VFIELD & BINFO on these
         things. */
      else if (!RECORD_OR_UNION_CODE_P (code))
        {
          // FIXME: These are from tpl_parm_value's 'type' writing.
          // Perhaps it should just be doing them directly?
          gcc_checking_assert (code == TEMPLATE_TYPE_PARM
                               || code == TEMPLATE_TEMPLATE_PARM
                               || code == BOUND_TEMPLATE_TEMPLATE_PARM);
          gcc_checking_assert (!TYPE_CACHED_VALUES_P (t));
          WT (t->type_non_common.values);
          WT (t->type_non_common.maxval);
          WT (t->type_non_common.minval);
        }

      WT (t->type_non_common.lang_1);
    }

  if (CODE_CONTAINS_STRUCT (code, TS_EXP))
    {
      if (state)
        state->write_location (*this, t->exp.locus);

      /* Walk in forward order, as (for instance) REQUIRES_EXPR has a
         bunch of unscoped parms on its first operand. It's safer to
         create those in order. */
      bool vl = TREE_CODE_CLASS (code) == tcc_vl_exp;
      for (unsigned limit = (vl ? VL_EXP_OPERAND_LENGTH (t)
                             : TREE_OPERAND_LENGTH (t)),
             ix = unsigned (vl); ix != limit; ix++)
        WT (TREE_OPERAND (t, ix));
    }
  else
    /* The CODE_CONTAINS tables were inaccurate when I started. */
    gcc_checking_assert (TREE_CODE_CLASS (code) != tcc_expression
                         && TREE_CODE_CLASS (code) != tcc_binary
                         && TREE_CODE_CLASS (code) != tcc_unary
                         && TREE_CODE_CLASS (code) != tcc_reference
                         && TREE_CODE_CLASS (code) != tcc_comparison
                         && TREE_CODE_CLASS (code) != tcc_statement
                         && TREE_CODE_CLASS (code) != tcc_vl_exp);

  /* Then by CODE. Special cases and/or 1:1 tree shape
     correspondence. */
  switch (code)
    {
    default:
      break;

    case ARGUMENT_PACK_SELECT:  /* Transient during instantiation. */
    case DEFERRED_PARSE:        /* Expanded upon completion of
                                   outermost class. */
    case IDENTIFIER_NODE:       /* Streamed specially. */
    case BINDING_VECTOR:        /* Only in namespace-scope symbol
                                   table. */
    case SSA_NAME:
    case TRANSLATION_UNIT_DECL: /* There is only one, it is a
                                   global_tree. */
    case USERDEF_LITERAL:       /* Expanded during parsing. */
      gcc_unreachable ();       /* Should never meet. */

    /* Constants. */
    case COMPLEX_CST:
      WT (TREE_REALPART (t));
      WT (TREE_IMAGPART (t));
      break;

    case FIXED_CST:
      gcc_unreachable ();       /* Not supported in C++. */

    case INTEGER_CST:
      if (streaming_p ())
        {
          unsigned num = TREE_INT_CST_EXT_NUNITS (t);
          for (unsigned ix = 0; ix != num; ix++)
            wu (TREE_INT_CST_ELT (t, ix));
        }
      break;

    case POLY_INT_CST:
      if (streaming_p ())
        for (unsigned ix = 0; ix != NUM_POLY_INT_COEFFS; ix++)
          WT (POLY_INT_CST_COEFF (t, ix));
      break;

    case REAL_CST:
      if (streaming_p ())
        buf (TREE_REAL_CST_PTR (t), sizeof (real_value));
      break;

    case STRING_CST:
      /* Streamed during start. */
      break;

    case VECTOR_CST:
      for (unsigned ix = vector_cst_encoded_nelts (t); ix--;)
        WT (VECTOR_CST_ENCODED_ELT (t, ix));
      break;

    /* Decls. */
    case VAR_DECL:
      if (DECL_CONTEXT (t)
          && TREE_CODE (DECL_CONTEXT (t)) != FUNCTION_DECL)
        break;
      /* FALLTHROUGH */

    case RESULT_DECL:
    case PARM_DECL:
      if (DECL_HAS_VALUE_EXPR_P (t))
        WT (DECL_VALUE_EXPR (t));
      /* FALLTHROUGH */

    case CONST_DECL:
    case IMPORTED_DECL:
      WT (t->decl_common.initial);
      break;

    case FIELD_DECL:
      WT (t->field_decl.offset);
      WT (t->field_decl.bit_field_type);
      WT (t->field_decl.qualifier);  /* bitfield unit. */
      WT (t->field_decl.bit_offset);
      WT (t->field_decl.fcontext);
      WT (t->decl_common.initial);
      break;

    case LABEL_DECL:
      if (streaming_p ())
        {
          WU (t->label_decl.label_decl_uid);
          WU (t->label_decl.eh_landing_pad_nr);
        }
      break;

    case FUNCTION_DECL:
      if (streaming_p ())
        {
          /* Builtins can be streamed by value when a header declares
             them. */
          WU (DECL_BUILT_IN_CLASS (t));
          if (DECL_BUILT_IN_CLASS (t) != NOT_BUILT_IN)
            WU (DECL_UNCHECKED_FUNCTION_CODE (t));
        }

      WT (t->function_decl.personality);
      WT (t->function_decl.function_specific_target);
      WT (t->function_decl.function_specific_optimization);
      WT (t->function_decl.vindex);

      if (DECL_HAS_DEPENDENT_EXPLICIT_SPEC_P (t))
        WT (lookup_explicit_specifier (t));
      break;

    case USING_DECL:
      /* USING_DECL_DECLS  */
      WT (t->decl_common.initial);
      /* FALLTHROUGH */

    case TYPE_DECL:
      /* USING_DECL: USING_DECL_SCOPE  */
      /* TYPE_DECL: DECL_ORIGINAL_TYPE  */
      WT (t->decl_non_common.result);
      break;

    /* Miscellaneous common nodes. */
    case BLOCK:
      if (state)
        {
          state->write_location (*this, t->block.locus);
          state->write_location (*this, t->block.end_locus);
        }

      /* DECL_LOCAL_DECL_P decls are first encountered here and
         streamed by value. */
      for (tree decls = t->block.vars; decls; decls = DECL_CHAIN (decls))
        {
          if (VAR_OR_FUNCTION_DECL_P (decls)
              && DECL_LOCAL_DECL_P (decls))
            {
              /* Make sure this is the first encounter, and mark for
                 walk-by-value. */
              gcc_checking_assert (!TREE_VISITED (decls)
                                   && !DECL_TEMPLATE_INFO (decls));
              mark_by_value (decls);
            }
          tree_node (decls);
        }
      tree_node (NULL_TREE);

      /* nonlocalized_vars is a middle-end thing. */
      WT (t->block.subblocks);
      WT (t->block.supercontext);
      // FIXME: As for decl's abstract_origin, does this introduce crosslinks?
      WT (t->block.abstract_origin);
      /* fragment_origin, fragment_chain are middle-end things. */
      WT (t->block.chain);
      /* nonlocalized_vars, block_num & die are middle endy/debug
         things. */
      break;

    case CALL_EXPR:
      if (streaming_p ())
        WU (t->base.u.ifn);
      break;

    case CONSTRUCTOR:
      // This must be streamed /after/ we've streamed the type,
      // because it can directly refer to elements of the type. Eg,
      // FIELD_DECLs of a RECORD_TYPE.
      break;

    case OMP_CLAUSE:
      {
        /* The ompcode is serialized in start. */
        if (streaming_p ())
          WU (t->omp_clause.subcode.map_kind);
        if (state)
          state->write_location (*this, t->omp_clause.locus);

        unsigned len = omp_clause_num_ops[OMP_CLAUSE_CODE (t)];
        for (unsigned ix = 0; ix != len; ix++)
          WT (t->omp_clause.ops[ix]);
      }
      break;

    case STATEMENT_LIST:
      for (tree stmt : tsi_range (t))
        if (stmt)
          WT (stmt);
      WT (NULL_TREE);
      break;

    case OPTIMIZATION_NODE:
    case TARGET_OPTION_NODE:
      // FIXME: Our representation for these two nodes is a cache of
      // the resulting set of options. Not a record of the options
      // that got changed by a particular attribute or pragma. Should
      // we record that, or should we record the diff from the command
      // line options? The latter seems the right behaviour, but is
      // (a) harder, and I guess could introduce strangeness if the
      // importer has set some incompatible set of optimization flags?
      gcc_unreachable ();
      break;

    case TREE_BINFO:
      {
        WT (t->binfo.common.chain);
        WT (t->binfo.offset);
        WT (t->binfo.inheritance);
        WT (t->binfo.vptr_field);

        WT (t->binfo.vtable);
        WT (t->binfo.virtuals);
        WT (t->binfo.vtt_subvtt);
        WT (t->binfo.vtt_vptr);

        tree_vec (BINFO_BASE_ACCESSES (t));
        unsigned num = vec_safe_length (BINFO_BASE_ACCESSES (t));
        for (unsigned ix = 0; ix != num; ix++)
          WT (BINFO_BASE_BINFO (t, ix));
      }
      break;

    case TREE_LIST:
      WT (t->list.purpose);
      WT (t->list.value);
      WT (t->list.common.chain);
      break;

    case TREE_VEC:
      for (unsigned ix = TREE_VEC_LENGTH (t); ix--;)
        WT (TREE_VEC_ELT (t, ix));
      /* We stash NON_DEFAULT_TEMPLATE_ARGS_COUNT on TREE_CHAIN!  */
      gcc_checking_assert (!t->type_common.common.chain
                           || (TREE_CODE (t->type_common.common.chain)
                               == INTEGER_CST));
      WT (t->type_common.common.chain);
      break;

    /* C++-specific nodes ...  */
    case BASELINK:
      WT (((lang_tree_node *)t)->baselink.binfo);
      WT (((lang_tree_node *)t)->baselink.functions);
      WT (((lang_tree_node *)t)->baselink.access_binfo);
      break;

    case CONSTRAINT_INFO:
      WT (((lang_tree_node *)t)->constraint_info.template_reqs);
      WT (((lang_tree_node *)t)->constraint_info.declarator_reqs);
      WT (((lang_tree_node *)t)->constraint_info.associated_constr);
      break;

    case DEFERRED_NOEXCEPT:
      WT (((lang_tree_node *)t)->deferred_noexcept.pattern);
      WT (((lang_tree_node *)t)->deferred_noexcept.args);
      break;

    case LAMBDA_EXPR:
      WT (((lang_tree_node *)t)->lambda_expression.capture_list);
      WT (((lang_tree_node *)t)->lambda_expression.this_capture);
      WT (((lang_tree_node *)t)->lambda_expression.extra_scope);
      WT (((lang_tree_node *)t)->lambda_expression.regen_info);
      WT (((lang_tree_node *)t)->lambda_expression.extra_args);
      /* pending_proxies is a parse-time thing. */
      gcc_assert (!((lang_tree_node *)t)->lambda_expression.pending_proxies);
      if (state)
        state->write_location
          (*this, ((lang_tree_node *)t)->lambda_expression.locus);
      if (streaming_p ())
        {
          WU (((lang_tree_node *)t)->lambda_expression.default_capture_mode);
          WU (((lang_tree_node *)t)->lambda_expression.discriminator_scope);
          WU (((lang_tree_node *)t)->lambda_expression.discriminator_sig);
        }
      break;

    case OVERLOAD:
      WT (((lang_tree_node *)t)->overload.function);
      WT (t->common.chain);
      break;

    case PTRMEM_CST:
      WT (((lang_tree_node *)t)->ptrmem.member);
      break;

    case STATIC_ASSERT:
      WT (((lang_tree_node *)t)->static_assertion.condition);
      WT (((lang_tree_node *)t)->static_assertion.message);
      if (state)
        state->write_location
          (*this, ((lang_tree_node *)t)->static_assertion.location);
      break;

    case TEMPLATE_DECL:
      /* Streamed with the template_decl node itself. */
      gcc_checking_assert
        (TREE_VISITED (((lang_tree_node *)t)->template_decl.arguments));
      gcc_checking_assert
        (TREE_VISITED (((lang_tree_node *)t)->template_decl.result));
      if (DECL_UNINSTANTIATED_TEMPLATE_FRIEND_P (t))
        WT (DECL_CHAIN (t));
      break;

    case TEMPLATE_INFO:
      {
        WT (((lang_tree_node *)t)->template_info.tmpl);
        WT (((lang_tree_node *)t)->template_info.args);
        WT (((lang_tree_node *)t)->template_info.partial);

        const auto *ac = (((lang_tree_node *)t)
                          ->template_info.deferred_access_checks);
        unsigned len = vec_safe_length (ac);
        if (streaming_p ())
          u (len);
        if (len)
          {
            for (unsigned ix = 0; ix != len; ix++)
              {
                const auto &m = (*ac)[ix];
                WT (m.binfo);
                WT (m.decl);
                WT (m.diag_decl);
                if (state)
                  state->write_location (*this, m.loc);
              }
          }
      }
      break;

    case TEMPLATE_PARM_INDEX:
      if (streaming_p ())
        {
          WU (((lang_tree_node *)t)->tpi.index);
          WU (((lang_tree_node *)t)->tpi.level);
          WU (((lang_tree_node *)t)->tpi.orig_level);
        }
      WT (((lang_tree_node *)t)->tpi.decl);
      /* TEMPLATE_PARM_DESCENDANTS (AKA TREE_CHAIN) is an internal
         cache, do not stream. */
      break;

    case TRAIT_EXPR:
      WT (((lang_tree_node *)t)->trait_expression.type1);
      WT (((lang_tree_node *)t)->trait_expression.type2);
      if (streaming_p ())
        WU (((lang_tree_node *)t)->trait_expression.kind);
      break;
    }

  if (CODE_CONTAINS_STRUCT (code, TS_TYPED))
    {
      /* We want to stream the type of expression-like nodes /after/
         we've streamed the operands. The type often contains (bits
         of the) types of the operands, and with things like decltype
         and noexcept in play, we really want to stream the decls
         defining the type before we try and stream the type on its
         own. Otherwise we can find ourselves trying to read in a
         decl, when we're already partially reading in a component of
         its type. And that's bad. */
      tree type = t->typed.type;
      unsigned prec = 0;

      switch (code)
        {
        default:
          break;

        case TEMPLATE_DECL:
          /* We fill in the template's type separately. */
          type = NULL_TREE;
          break;

        case TYPE_DECL:
          if (DECL_ORIGINAL_TYPE (t) && t == TYPE_NAME (type))
            /* This is a typedef. We set its type separately. */
            type = NULL_TREE;
          break;

        case ENUMERAL_TYPE:
          if (type && !ENUM_FIXED_UNDERLYING_TYPE_P (t))
            {
              /* Type is a restricted range integer type derived from the
                 integer_types. Find the right one. */
              prec = TYPE_PRECISION (type);
              tree name = DECL_NAME (TYPE_NAME (type));

              for (unsigned itk = itk_none; itk--;)
                if (integer_types[itk]
                    && DECL_NAME (TYPE_NAME (integer_types[itk])) == name)
                  {
                    type = integer_types[itk];
                    break;
                  }
              gcc_assert (type != t->typed.type);
            }
          break;
        }

      WT (type);
      if (prec && streaming_p ())
        WU (prec);
    }

  if (TREE_CODE (t) == CONSTRUCTOR)
    {
      unsigned len = vec_safe_length (t->constructor.elts);
      if (streaming_p ())
        WU (len);
      if (len)
        for (unsigned ix = 0; ix != len; ix++)
          {
            const constructor_elt &elt = (*t->constructor.elts)[ix];

            WT (elt.index);
            WT (elt.value);
          }
    }

#undef WT
#undef WU
}

// Streaming in a reference to a decl can cause that decl to be
// TREE_USED, which is the mark_used behaviour we need most of the
// time. The trees_in::unused counter can be incremented to inhibit
// this, which is at least needed for vtables.

bool
trees_in::core_vals (tree t)
{
#define RU(X) ((X) = u ())
#define RUC(T,X) ((X) = T (u ()))
#define RT(X) ((X) = tree_node ())
#define RTU(X) ((X) = tree_node (true))
  tree_code code = TREE_CODE (t);

  /* First by tree shape. */
  if (CODE_CONTAINS_STRUCT (code, TS_DECL_MINIMAL))
    {
      RT (t->decl_minimal.name);
      if (!DECL_TEMPLATE_PARM_P (t))
        RT (t->decl_minimal.context);

      /* Don't zap the locus just yet, we don't record it correctly
         and thus lose all location information. */
      t->decl_minimal.locus = state->read_location (*this);
    }

  if (CODE_CONTAINS_STRUCT (code, TS_TYPE_COMMON))
    {
      RT (t->type_common.name);
      RT (t->type_common.context);

      RT (t->type_common.main_variant);
      RT (t->type_common.canonical);

      /* type_common.next_variant is internally manipulated. */
      /* type_common.pointer_to, type_common.reference_to. */

      RU (t->type_common.precision);
      RU (t->type_common.contains_placeholder_bits);
      RUC (machine_mode, t->type_common.mode);
      RU (t->type_common.align);

      if (!RECORD_OR_UNION_CODE_P (code))
        {
          RT (t->type_common.size);
          RT (t->type_common.size_unit);
        }
      RT (t->type_common.attributes);

      RT (t->type_common.common.chain);  /* TYPE_STUB_DECL. */
    }

  if (CODE_CONTAINS_STRUCT (code, TS_DECL_COMMON))
    {
      RUC (machine_mode, t->decl_common.mode);
      RU (t->decl_common.off_align);
      RU (t->decl_common.align);

      if (code != TEMPLATE_DECL)
        {
          RT (t->decl_common.size);
          RT (t->decl_common.size_unit);
        }

      RT (t->decl_common.attributes);
      RT (t->decl_common.abstract_origin);
    }

  if (CODE_CONTAINS_STRUCT (code, TS_DECL_WITH_VIS))
    {
      RT (t->decl_with_vis.assembler_name);
      RUC (symbol_visibility, t->decl_with_vis.visibility);
    }

  if (CODE_CONTAINS_STRUCT (code, TS_TYPE_NON_COMMON))
    {
      if (code == ENUMERAL_TYPE)
        {
          /* These fields get set even for opaque enums that lack a
             definition, so we stream them directly for each ENUMERAL_TYPE.
             We stream TYPE_VALUES as part of the definition. */
          RT (t->type_non_common.maxval);
          RT (t->type_non_common.minval);
        }
      /* Records and unions hold FIELDS, VFIELD & BINFO on these
         things. */
      else if (!RECORD_OR_UNION_CODE_P (code))
        {
          /* This is not clobbering TYPE_CACHED_VALUES, because this
             is a type that doesn't have any. */
          gcc_checking_assert (!TYPE_CACHED_VALUES_P (t));
          RT (t->type_non_common.values);
          RT (t->type_non_common.maxval);
          RT (t->type_non_common.minval);
        }

      RT (t->type_non_common.lang_1);
    }

  if (CODE_CONTAINS_STRUCT (code, TS_EXP))
    {
      t->exp.locus = state->read_location (*this);

      bool vl = TREE_CODE_CLASS (code) == tcc_vl_exp;
      for (unsigned limit = (vl ? VL_EXP_OPERAND_LENGTH (t)
                             : TREE_OPERAND_LENGTH (t)),
             ix = unsigned (vl); ix != limit; ix++)
        RTU (TREE_OPERAND (t, ix));
    }

  /* Then by CODE. Special cases and/or 1:1 tree shape
     correspondence. */
  switch (code)
    {
    default:
      break;

    case ARGUMENT_PACK_SELECT:
    case DEFERRED_PARSE:
    case IDENTIFIER_NODE:
    case BINDING_VECTOR:
    case SSA_NAME:
    case TRANSLATION_UNIT_DECL:
    case USERDEF_LITERAL:
      return false;  /* Should never meet. */

    /* Constants. */
    case COMPLEX_CST:
      RT (TREE_REALPART (t));
      RT (TREE_IMAGPART (t));
      break;

    case FIXED_CST:
      /* Not supported in C++. */
      return false;

    case INTEGER_CST:
      {
        unsigned num = TREE_INT_CST_EXT_NUNITS (t);
        for (unsigned ix = 0; ix != num; ix++)
          TREE_INT_CST_ELT (t, ix) = wu ();
      }
      break;

    case POLY_INT_CST:
      for (unsigned ix = 0; ix != NUM_POLY_INT_COEFFS; ix++)
        RT (POLY_INT_CST_COEFF (t, ix));
      break;

    case REAL_CST:
      if (const void *bytes = buf (sizeof (real_value)))
        memcpy (TREE_REAL_CST_PTR (t), bytes, sizeof (real_value));
      break;

    case STRING_CST:
      /* Streamed during start. */
      break;

    case VECTOR_CST:
      for (unsigned ix = vector_cst_encoded_nelts (t); ix--;)
        RT (VECTOR_CST_ENCODED_ELT (t, ix));
      break;

    /* Decls. */
    case VAR_DECL:
      if (DECL_CONTEXT (t)
          && TREE_CODE (DECL_CONTEXT (t)) != FUNCTION_DECL)
        break;
      /* FALLTHROUGH */

    case RESULT_DECL:
    case PARM_DECL:
      if (DECL_HAS_VALUE_EXPR_P (t))
        {
          /* The DECL_VALUE hash table is a cache, thus if we're
             reading a duplicate (which we end up discarding), the
             value expr will also be cleaned up at the next gc. */
          tree val = tree_node ();
          SET_DECL_VALUE_EXPR (t, val);
        }
      /* FALLTHROUGH */

    case CONST_DECL:
    case IMPORTED_DECL:
      RT (t->decl_common.initial);
      break;

    case FIELD_DECL:
      RT (t->field_decl.offset);
      RT (t->field_decl.bit_field_type);
      RT (t->field_decl.qualifier);
      RT (t->field_decl.bit_offset);
      RT (t->field_decl.fcontext);
      RT (t->decl_common.initial);
      break;

    case LABEL_DECL:
      RU (t->label_decl.label_decl_uid);
      RU (t->label_decl.eh_landing_pad_nr);
      break;

    case FUNCTION_DECL:
      {
        unsigned bltin = u ();
        t->function_decl.built_in_class = built_in_class (bltin);
        if (bltin != NOT_BUILT_IN)
          {
            bltin = u ();
            DECL_UNCHECKED_FUNCTION_CODE (t) = built_in_function (bltin);
          }

        RT (t->function_decl.personality);
        RT (t->function_decl.function_specific_target);
        RT (t->function_decl.function_specific_optimization);
        RT (t->function_decl.vindex);

        if (DECL_HAS_DEPENDENT_EXPLICIT_SPEC_P (t))
          {
            tree spec;
            RT (spec);
            store_explicit_specifier (t, spec);
          }
      }
      break;

    case USING_DECL:
      /* USING_DECL_DECLS  */
      RT (t->decl_common.initial);
      /* FALLTHROUGH */

    case TYPE_DECL:
      /* USING_DECL: USING_DECL_SCOPE  */
      /* TYPE_DECL: DECL_ORIGINAL_TYPE  */
      RT (t->decl_non_common.result);
      break;

    /* Miscellaneous common nodes. */
    case BLOCK:
      t->block.locus = state->read_location (*this);
      t->block.end_locus = state->read_location (*this);

      for (tree *chain = &t->block.vars;;)
        if (tree decl = tree_node ())
          {
            /* For a deduplicated local type or enumerator, chain the
               duplicate decl instead of the canonical in-TU decl. Seeing
               a duplicate here means the containing function whose body
               we're streaming in is a duplicate too, so we'll end up
               discarding this BLOCK (and the rest of the duplicate
               function body) anyway. */
            decl = maybe_duplicate (decl);

            if (!DECL_P (decl) || DECL_CHAIN (decl))
              {
                set_overrun ();
                break;
              }
            *chain = decl;
            chain = &DECL_CHAIN (decl);
          }
        else
          break;

      /* nonlocalized_vars is middle-end. */
      RT (t->block.subblocks);
      RT (t->block.supercontext);
      RT (t->block.abstract_origin);
      /* fragment_origin, fragment_chain are middle-end. */
      RT (t->block.chain);
      /* nonlocalized_vars, block_num, die are middle endy/debug
         things. */
      break;

    case CALL_EXPR:
      RUC (internal_fn, t->base.u.ifn);
      break;

    case CONSTRUCTOR:
      // Streamed after the node's type.
      break;

    case OMP_CLAUSE:
      {
        RU (t->omp_clause.subcode.map_kind);
        t->omp_clause.locus = state->read_location (*this);

        unsigned len = omp_clause_num_ops[OMP_CLAUSE_CODE (t)];
        for (unsigned ix = 0; ix != len; ix++)
          RT (t->omp_clause.ops[ix]);
      }
      break;

    case STATEMENT_LIST:
      {
        tree_stmt_iterator iter = tsi_start (t);
        for (tree stmt; RT (stmt);)
6845 tsi_link_after (&iter, stmt, TSI_CONTINUE_LINKING);
6846 }
6847 break;
6848
6849 case OPTIMIZATION_NODE:
6850 case TARGET_OPTION_NODE:
6851 /* Not yet implemented, see trees_out::core_vals. */
6852 gcc_unreachable ();
6853 break;
6854
6855 case TREE_BINFO:
6856 RT (t->binfo.common.chain);
6857 RT (t->binfo.offset);
6858 RT (t->binfo.inheritance);
6859 RT (t->binfo.vptr_field);
6860
6861 /* Do not mark the vtables as USED in the address expressions
6862 here. */
6863 unused++;
6864 RT (t->binfo.vtable);
6865 RT (t->binfo.virtuals);
6866 RT (t->binfo.vtt_subvtt);
6867 RT (t->binfo.vtt_vptr);
6868 unused--;
6869
6870 BINFO_BASE_ACCESSES (t) = tree_vec ();
6871 if (!get_overrun ())
6872 {
6873 unsigned num = vec_safe_length (BINFO_BASE_ACCESSES (t));
6874 for (unsigned ix = 0; ix != num; ix++)
6875 BINFO_BASE_APPEND (t, tree_node ());
6876 }
6877 break;
6878
6879 case TREE_LIST:
6880 RT (t->list.purpose);
6881 RT (t->list.value);
6882 RT (t->list.common.chain);
6883 break;
6884
6885 case TREE_VEC:
6886 for (unsigned ix = TREE_VEC_LENGTH (t); ix--;)
6887 RT (TREE_VEC_ELT (t, ix));
6888 RT (t->type_common.common.chain);
6889 break;
6890
6891 /* C++-specific nodes ... */
6892 case BASELINK:
6893 RT (((lang_tree_node *)t)->baselink.binfo);
6894 RTU (((lang_tree_node *)t)->baselink.functions);
6895 RT (((lang_tree_node *)t)->baselink.access_binfo);
6896 break;
6897
6898 case CONSTRAINT_INFO:
6899 RT (((lang_tree_node *)t)->constraint_info.template_reqs);
6900 RT (((lang_tree_node *)t)->constraint_info.declarator_reqs);
6901 RT (((lang_tree_node *)t)->constraint_info.associated_constr);
6902 break;
6903
6904 case DEFERRED_NOEXCEPT:
6905 RT (((lang_tree_node *)t)->deferred_noexcept.pattern);
6906 RT (((lang_tree_node *)t)->deferred_noexcept.args);
6907 break;
6908
6909 case LAMBDA_EXPR:
6910 RT (((lang_tree_node *)t)->lambda_expression.capture_list);
6911 RT (((lang_tree_node *)t)->lambda_expression.this_capture);
6912 RT (((lang_tree_node *)t)->lambda_expression.extra_scope);
6913 RT (((lang_tree_node *)t)->lambda_expression.regen_info);
6914 RT (((lang_tree_node *)t)->lambda_expression.extra_args);
6915 /* lambda_expression.pending_proxies is NULL */
6916 ((lang_tree_node *)t)->lambda_expression.locus
6917 = state->read_location (*this);
6918 RUC (cp_lambda_default_capture_mode_type,
6919 ((lang_tree_node *)t)->lambda_expression.default_capture_mode);
6920 RU (((lang_tree_node *)t)->lambda_expression.discriminator_scope);
6921 RU (((lang_tree_node *)t)->lambda_expression.discriminator_sig);
6922 break;
6923
6924 case OVERLOAD:
6925 RT (((lang_tree_node *)t)->overload.function);
6926 RT (t->common.chain);
6927 break;
6928
6929 case PTRMEM_CST:
6930 RT (((lang_tree_node *)t)->ptrmem.member);
6931 break;
6932
6933 case STATIC_ASSERT:
6934 RT (((lang_tree_node *)t)->static_assertion.condition);
6935 RT (((lang_tree_node *)t)->static_assertion.message);
6936 ((lang_tree_node *)t)->static_assertion.location
6937 = state->read_location (*this);
6938 break;
6939
6940 case TEMPLATE_DECL:
6941 /* Streamed when reading the raw template decl itself. */
6942 gcc_assert (((lang_tree_node *)t)->template_decl.arguments);
6943 gcc_assert (((lang_tree_node *)t)->template_decl.result);
6944 if (DECL_UNINSTANTIATED_TEMPLATE_FRIEND_P (t))
6945 RT (DECL_CHAIN (t));
6946 break;
6947
6948 case TEMPLATE_INFO:
6949 RT (((lang_tree_node *)t)->template_info.tmpl);
6950 RT (((lang_tree_node *)t)->template_info.args);
6951 RT (((lang_tree_node *)t)->template_info.partial);
6952 if (unsigned len = u ())
6953 {
6954 auto &ac = (((lang_tree_node *)t)
6955 ->template_info.deferred_access_checks);
6956 vec_alloc (v&: ac, nelems: len);
6957 for (unsigned ix = 0; ix != len; ix++)
6958 {
6959 deferred_access_check m;
6960
6961 RT (m.binfo);
6962 RT (m.decl);
6963 RT (m.diag_decl);
6964 m.loc = state->read_location (*this);
6965 ac->quick_push (obj: m);
6966 }
6967 }
6968 break;
6969
6970 case TEMPLATE_PARM_INDEX:
6971 RU (((lang_tree_node *)t)->tpi.index);
6972 RU (((lang_tree_node *)t)->tpi.level);
6973 RU (((lang_tree_node *)t)->tpi.orig_level);
6974 RT (((lang_tree_node *)t)->tpi.decl);
6975 break;
6976
6977 case TRAIT_EXPR:
6978 RT (((lang_tree_node *)t)->trait_expression.type1);
6979 RT (((lang_tree_node *)t)->trait_expression.type2);
6980 RUC (cp_trait_kind, ((lang_tree_node *)t)->trait_expression.kind);
6981 break;
6982 }
6983
6984 if (CODE_CONTAINS_STRUCT (code, TS_TYPED))
6985 {
6986 tree type = tree_node ();
6987
6988 if (type && code == ENUMERAL_TYPE && !ENUM_FIXED_UNDERLYING_TYPE_P (t))
6989 {
6990 unsigned precision = u ();
6991
6992 type = build_distinct_type_copy (type);
6993 TYPE_PRECISION (type) = precision;
6994 set_min_and_max_values_for_integral_type (type, precision,
6995 TYPE_SIGN (type));
6996 }
6997
6998 if (code != TEMPLATE_DECL)
6999 t->typed.type = type;
7000 }
7001
7002 if (TREE_CODE (t) == CONSTRUCTOR)
7003 if (unsigned len = u ())
7004 {
7005 vec_alloc (v&: t->constructor.elts, nelems: len);
7006 for (unsigned ix = 0; ix != len; ix++)
7007 {
7008 constructor_elt elt;
7009
7010 RT (elt.index);
7011 RTU (elt.value);
7012 t->constructor.elts->quick_push (obj: elt);
7013 }
7014 }
7015
7016#undef RT
7017#undef RM
7018#undef RU
7019 return !get_overrun ();
7020}
7021
void
trees_out::lang_decl_vals (tree t)
{
  const struct lang_decl *lang = DECL_LANG_SPECIFIC (t);
#define WU(X) (u (X))
#define WT(X) (tree_node (X))
  /* Module index already written.  */
  switch (lang->u.base.selector)
    {
    default:
      gcc_unreachable ();

    case lds_fn:  /* lang_decl_fn.  */
      if (streaming_p ())
	{
	  if (DECL_NAME (t) && IDENTIFIER_OVL_OP_P (DECL_NAME (t)))
	    WU (lang->u.fn.ovl_op_code);
	}

      if (DECL_CLASS_SCOPE_P (t))
	WT (lang->u.fn.context);

      if (lang->u.fn.thunk_p)
	{
	  /* The thunked-to function.  */
	  WT (lang->u.fn.befriending_classes);
	  if (streaming_p ())
	    wi (lang->u.fn.u5.fixed_offset);
	}
      else if (decl_tls_wrapper_p (t))
	/* The wrapped variable.  */
	WT (lang->u.fn.befriending_classes);
      else
	WT (lang->u.fn.u5.cloned_function);

      if (FNDECL_USED_AUTO (t))
	WT (lang->u.fn.u.saved_auto_return_type);

      goto lds_min;

    case lds_decomp:  /* lang_decl_decomp.  */
      WT (lang->u.decomp.base);
      goto lds_min;

    case lds_min:  /* lang_decl_min.  */
    lds_min:
      WT (lang->u.min.template_info);
      {
	tree access = lang->u.min.access;

	/* DECL_ACCESS needs to be maintained by the definition of the
	   (derived) class that changes the access.  The other users
	   of DECL_ACCESS need to write it here.  */
	if (!DECL_THUNK_P (t)
	    && (DECL_CONTEXT (t) && TYPE_P (DECL_CONTEXT (t))))
	  access = NULL_TREE;

	WT (access);
      }
      break;

    case lds_ns:  /* lang_decl_ns.  */
      break;

    case lds_parm:  /* lang_decl_parm.  */
      if (streaming_p ())
	{
	  WU (lang->u.parm.level);
	  WU (lang->u.parm.index);
	}
      break;
    }
#undef WU
#undef WT
}

bool
trees_in::lang_decl_vals (tree t)
{
  struct lang_decl *lang = DECL_LANG_SPECIFIC (t);
#define RU(X) ((X) = u ())
#define RT(X) ((X) = tree_node ())

  /* Module index already read.  */
  switch (lang->u.base.selector)
    {
    default:
      gcc_unreachable ();

    case lds_fn:  /* lang_decl_fn.  */
      if (DECL_NAME (t) && IDENTIFIER_OVL_OP_P (DECL_NAME (t)))
	{
	  unsigned code = u ();

	  /* Check consistency.  */
	  if (code >= OVL_OP_MAX
	      || (ovl_op_info[IDENTIFIER_ASSIGN_OP_P (DECL_NAME (t))][code]
		  .ovl_op_code) == OVL_OP_ERROR_MARK)
	    set_overrun ();
	  else
	    lang->u.fn.ovl_op_code = code;
	}

      if (DECL_CLASS_SCOPE_P (t))
	RT (lang->u.fn.context);

      if (lang->u.fn.thunk_p)
	{
	  RT (lang->u.fn.befriending_classes);
	  lang->u.fn.u5.fixed_offset = wi ();
	}
      else if (decl_tls_wrapper_p (t))
	RT (lang->u.fn.befriending_classes);
      else
	RT (lang->u.fn.u5.cloned_function);

      if (FNDECL_USED_AUTO (t))
	RT (lang->u.fn.u.saved_auto_return_type);
      goto lds_min;

    case lds_decomp:  /* lang_decl_decomp.  */
      RT (lang->u.decomp.base);
      goto lds_min;

    case lds_min:  /* lang_decl_min.  */
    lds_min:
      RT (lang->u.min.template_info);
      RT (lang->u.min.access);
      break;

    case lds_ns:  /* lang_decl_ns.  */
      break;

    case lds_parm:  /* lang_decl_parm.  */
      RU (lang->u.parm.level);
      RU (lang->u.parm.index);
      break;
    }
#undef RU
#undef RT
  return !get_overrun ();
}

/* Most of the value contents of lang_type is streamed in
   define_class.  */

void
trees_out::lang_type_vals (tree t)
{
  const struct lang_type *lang = TYPE_LANG_SPECIFIC (t);
#define WU(X) (u (X))
#define WT(X) (tree_node (X))
  if (streaming_p ())
    WU (lang->align);
#undef WU
#undef WT
}

bool
trees_in::lang_type_vals (tree t)
{
  struct lang_type *lang = TYPE_LANG_SPECIFIC (t);
#define RU(X) ((X) = u ())
#define RT(X) ((X) = tree_node ())
  RU (lang->align);
#undef RU
#undef RT
  return !get_overrun ();
}

/* Write out the bools of T, including information about any
   LANG_SPECIFIC information.  Including allocation of any lang
   specific object.  */

void
trees_out::tree_node_bools (tree t)
{
  gcc_checking_assert (streaming_p ());

  /* We should never stream a namespace.  */
  gcc_checking_assert (TREE_CODE (t) != NAMESPACE_DECL
		       || DECL_NAMESPACE_ALIAS (t));

  bits_out bits = stream_bits ();
  core_bools (t, bits);

  switch (TREE_CODE_CLASS (TREE_CODE (t)))
    {
    case tcc_declaration:
      {
	bool specific = DECL_LANG_SPECIFIC (t) != NULL;
	bits.b (specific);
	if (specific && VAR_P (t))
	  bits.b (DECL_DECOMPOSITION_P (t));
	if (specific)
	  lang_decl_bools (t, bits);
      }
      break;

    case tcc_type:
      {
	bool specific = (TYPE_MAIN_VARIANT (t) == t
			 && TYPE_LANG_SPECIFIC (t) != NULL);
	gcc_assert (TYPE_LANG_SPECIFIC (t)
		    == TYPE_LANG_SPECIFIC (TYPE_MAIN_VARIANT (t)));

	bits.b (specific);
	if (specific)
	  lang_type_bools (t, bits);
      }
      break;

    default:
      break;
    }

  bits.bflush ();
}

bool
trees_in::tree_node_bools (tree t)
{
  bits_in bits = stream_bits ();
  bool ok = core_bools (t, bits);

  if (ok)
    switch (TREE_CODE_CLASS (TREE_CODE (t)))
      {
      case tcc_declaration:
	if (bits.b ())
	  {
	    bool decomp = VAR_P (t) && bits.b ();

	    ok = maybe_add_lang_decl_raw (t, decomp);
	    if (ok)
	      ok = lang_decl_bools (t, bits);
	  }
	break;

      case tcc_type:
	if (bits.b ())
	  {
	    ok = maybe_add_lang_type_raw (t);
	    if (ok)
	      ok = lang_type_bools (t, bits);
	  }
	break;

      default:
	break;
      }

  bits.bflush ();
  if (!ok || get_overrun ())
    return false;

  return true;
}


/* Write out the lang-specific vals of node T.  */

void
trees_out::lang_vals (tree t)
{
  switch (TREE_CODE_CLASS (TREE_CODE (t)))
    {
    case tcc_declaration:
      if (DECL_LANG_SPECIFIC (t))
	lang_decl_vals (t);
      break;

    case tcc_type:
      if (TYPE_MAIN_VARIANT (t) == t && TYPE_LANG_SPECIFIC (t))
	lang_type_vals (t);
      break;

    default:
      break;
    }
}

bool
trees_in::lang_vals (tree t)
{
  bool ok = true;

  switch (TREE_CODE_CLASS (TREE_CODE (t)))
    {
    case tcc_declaration:
      if (DECL_LANG_SPECIFIC (t))
	ok = lang_decl_vals (t);
      break;

    case tcc_type:
      if (TYPE_LANG_SPECIFIC (t))
	ok = lang_type_vals (t);
      else
	TYPE_LANG_SPECIFIC (t) = TYPE_LANG_SPECIFIC (TYPE_MAIN_VARIANT (t));
      break;

    default:
      break;
    }

  return ok;
}

/* Write out the value fields of node T.  */

void
trees_out::tree_node_vals (tree t)
{
  core_vals (t);
  lang_vals (t);
}

bool
trees_in::tree_node_vals (tree t)
{
  bool ok = core_vals (t);
  if (ok)
    ok = lang_vals (t);

  return ok;
}


/* If T is a back reference, fixed reference or NULL, write out its
   code and return WK_none.  Otherwise return WK_value if we must write
   by value, or WK_normal otherwise.  */

walk_kind
trees_out::ref_node (tree t)
{
  if (!t)
    {
      if (streaming_p ())
	{
	  /* NULL_TREE -> tt_null.  */
	  null_count++;
	  i (tt_null);
	}
      return WK_none;
    }

  if (!TREE_VISITED (t))
    return WK_normal;

  /* An already-visited tree.  It must be in the map.  */
  int val = get_tag (t);

  if (val == tag_value)
    /* An entry we should walk into.  */
    return WK_value;

  const char *kind;

  if (val <= tag_backref)
    {
      /* Back reference -> -ve number  */
      if (streaming_p ())
	i (val);
      kind = "backref";
    }
  else if (val >= tag_fixed)
    {
      /* Fixed reference -> tt_fixed  */
      val -= tag_fixed;
      if (streaming_p ())
	i (tt_fixed), u (val);
      kind = "fixed";
    }

  if (streaming_p ())
    {
      back_ref_count++;
      dump (dumper::TREE)
	&& dump ("Wrote %s:%d %C:%N%S", kind, val, TREE_CODE (t), t, t);
    }
  return WK_none;
}

tree
trees_in::back_ref (int tag)
{
  tree res = NULL_TREE;

  if (tag < 0 && unsigned (~tag) < back_refs.length ())
    res = back_refs[~tag];

  if (!res
      /* Checking TREE_CODE is a dereference, so we know this is not a
	 wild pointer.  Checking the code provides evidence we've not
	 corrupted something.  */
      || TREE_CODE (res) >= MAX_TREE_CODES)
    set_overrun ();
  else
    dump (dumper::TREE) && dump ("Read backref:%d found %C:%N%S", tag,
				 TREE_CODE (res), res, res);
  return res;
}

unsigned
trees_out::add_indirect_tpl_parms (tree parms)
{
  unsigned len = 0;
  for (; parms; parms = TREE_CHAIN (parms), len++)
    {
      if (TREE_VISITED (parms))
	break;

      int tag = insert (parms);
      if (streaming_p ())
	dump (dumper::TREE)
	  && dump ("Indirect:%d template's parameter %u %C:%N",
		   tag, len, TREE_CODE (parms), parms);
    }

  if (streaming_p ())
    u (len);

  return len;
}

unsigned
trees_in::add_indirect_tpl_parms (tree parms)
{
  unsigned len = u ();
  for (unsigned ix = 0; ix != len; parms = TREE_CHAIN (parms), ix++)
    {
      int tag = insert (parms);
      dump (dumper::TREE)
	&& dump ("Indirect:%d template's parameter %u %C:%N",
		 tag, ix, TREE_CODE (parms), parms);
    }

  return len;
}

/* We've just found DECL by name.  Insert nodes that come with it, but
   cannot be found by name, so we'll not accidentally walk into them.  */

void
trees_out::add_indirects (tree decl)
{
  unsigned count = 0;

  // FIXME:OPTIMIZATION We'll eventually want default fn parms of
  // templates and perhaps default template parms too.  The former can
  // be referenced from instantiations (as they are lazily
  // instantiated).  Also (deferred?) exception specifications of
  // templates.  See the note about PARM_DECLs in trees_out::decl_node.
  tree inner = decl;
  if (TREE_CODE (decl) == TEMPLATE_DECL)
    {
      count += add_indirect_tpl_parms (DECL_TEMPLATE_PARMS (decl));

      inner = DECL_TEMPLATE_RESULT (decl);
      int tag = insert (inner);
      if (streaming_p ())
	dump (dumper::TREE)
	  && dump ("Indirect:%d template's result %C:%N",
		   tag, TREE_CODE (inner), inner);
      count++;
    }

  if (TREE_CODE (inner) == TYPE_DECL)
    {
      /* Make sure the type is in the map too.  Otherwise we get
	 different RECORD_TYPEs for the same type, and things go
	 south.  */
      tree type = TREE_TYPE (inner);
      gcc_checking_assert (DECL_ORIGINAL_TYPE (inner)
			   || TYPE_NAME (type) == inner);
      int tag = insert (type);
      if (streaming_p ())
	dump (dumper::TREE) && dump ("Indirect:%d decl's type %C:%N", tag,
				     TREE_CODE (type), type);
      count++;
    }

  if (streaming_p ())
    {
      u (count);
      dump (dumper::TREE) && dump ("Inserted %u indirects", count);
    }
}

bool
trees_in::add_indirects (tree decl)
{
  unsigned count = 0;

  tree inner = decl;
  if (TREE_CODE (inner) == TEMPLATE_DECL)
    {
      count += add_indirect_tpl_parms (DECL_TEMPLATE_PARMS (decl));

      inner = DECL_TEMPLATE_RESULT (decl);
      int tag = insert (inner);
      dump (dumper::TREE)
	&& dump ("Indirect:%d template's result %C:%N", tag,
		 TREE_CODE (inner), inner);
      count++;
    }

  if (TREE_CODE (inner) == TYPE_DECL)
    {
      tree type = TREE_TYPE (inner);
      gcc_checking_assert (DECL_ORIGINAL_TYPE (inner)
			   || TYPE_NAME (type) == inner);
      int tag = insert (type);
      dump (dumper::TREE)
	&& dump ("Indirect:%d decl's type %C:%N", tag, TREE_CODE (type), type);
      count++;
    }

  dump (dumper::TREE) && dump ("Inserted %u indirects", count);
  return count == u ();
}

/* Stream a template parameter.  There are 4.5 kinds of parameter:
   a) Template - TEMPLATE_DECL->TYPE_DECL->TEMPLATE_TEMPLATE_PARM
	TEMPLATE_TYPE_PARM_INDEX TPI
   b) Type - TYPE_DECL->TEMPLATE_TYPE_PARM TEMPLATE_TYPE_PARM_INDEX TPI
   c.1) NonTYPE - PARM_DECL DECL_INITIAL TPI  We meet this first
   c.2) NonTYPE - CONST_DECL DECL_INITIAL  Same TPI
   d) BoundTemplate - TYPE_DECL->BOUND_TEMPLATE_TEMPLATE_PARM
       TEMPLATE_TYPE_PARM_INDEX->TPI
       TEMPLATE_TEMPLATE_PARM_INFO->TEMPLATE_INFO

   All of these point to a TEMPLATE_PARM_INDEX, and (d) also has a
   TEMPLATE_INFO.  */

void
trees_out::tpl_parm_value (tree parm)
{
  gcc_checking_assert (DECL_P (parm) && DECL_TEMPLATE_PARM_P (parm));

  int parm_tag = insert (parm);
  if (streaming_p ())
    {
      i (tt_tpl_parm);
      dump (dumper::TREE) && dump ("Writing template parm:%d %C:%N",
				   parm_tag, TREE_CODE (parm), parm);
      start (parm);
      tree_node_bools (parm);
    }

  tree inner = parm;
  if (TREE_CODE (inner) == TEMPLATE_DECL)
    {
      inner = DECL_TEMPLATE_RESULT (inner);
      int inner_tag = insert (inner);
      if (streaming_p ())
	{
	  dump (dumper::TREE) && dump ("Writing inner template parm:%d %C:%N",
				       inner_tag, TREE_CODE (inner), inner);
	  start (inner);
	  tree_node_bools (inner);
	}
    }

  tree type = NULL_TREE;
  if (TREE_CODE (inner) == TYPE_DECL)
    {
      type = TREE_TYPE (inner);
      int type_tag = insert (type);
      if (streaming_p ())
	{
	  dump (dumper::TREE) && dump ("Writing template parm type:%d %C:%N",
				       type_tag, TREE_CODE (type), type);
	  start (type);
	  tree_node_bools (type);
	}
    }

  if (inner != parm)
    {
      /* This is a template-template parameter.  */
      unsigned tpl_levels = 0;
      tpl_header (parm, &tpl_levels);
      tpl_parms_fini (parm, tpl_levels);
    }

  tree_node_vals (parm);
  if (inner != parm)
    tree_node_vals (inner);
  if (type)
    {
      tree_node_vals (type);
      if (DECL_NAME (inner) == auto_identifier
	  || DECL_NAME (inner) == decltype_auto_identifier)
	{
	  /* Placeholder auto.  */
	  tree_node (DECL_INITIAL (inner));
	  tree_node (DECL_SIZE_UNIT (inner));
	}
    }

  if (streaming_p ())
    dump (dumper::TREE) && dump ("Wrote template parm:%d %C:%N",
				 parm_tag, TREE_CODE (parm), parm);
}

tree
trees_in::tpl_parm_value ()
{
  tree parm = start ();
  if (!parm || !tree_node_bools (parm))
    return NULL_TREE;

  int parm_tag = insert (parm);
  dump (dumper::TREE) && dump ("Reading template parm:%d %C:%N",
			       parm_tag, TREE_CODE (parm), parm);

  tree inner = parm;
  if (TREE_CODE (inner) == TEMPLATE_DECL)
    {
      inner = start ();
      if (!inner || !tree_node_bools (inner))
	return NULL_TREE;
      int inner_tag = insert (inner);
      dump (dumper::TREE) && dump ("Reading inner template parm:%d %C:%N",
				   inner_tag, TREE_CODE (inner), inner);
      DECL_TEMPLATE_RESULT (parm) = inner;
    }

  tree type = NULL_TREE;
  if (TREE_CODE (inner) == TYPE_DECL)
    {
      type = start ();
      if (!type || !tree_node_bools (type))
	return NULL_TREE;
      int type_tag = insert (type);
      dump (dumper::TREE) && dump ("Reading template parm type:%d %C:%N",
				   type_tag, TREE_CODE (type), type);

      TREE_TYPE (inner) = TREE_TYPE (parm) = type;
      TYPE_NAME (type) = parm;
    }

  if (inner != parm)
    {
      /* A template template parameter.  */
      unsigned tpl_levels = 0;
      tpl_header (parm, &tpl_levels);
      tpl_parms_fini (parm, tpl_levels);
    }

  tree_node_vals (parm);
  if (inner != parm)
    tree_node_vals (inner);
  if (type)
    {
      tree_node_vals (type);
      if (DECL_NAME (inner) == auto_identifier
	  || DECL_NAME (inner) == decltype_auto_identifier)
	{
	  /* Placeholder auto.  */
	  DECL_INITIAL (inner) = tree_node ();
	  DECL_SIZE_UNIT (inner) = tree_node ();
	}
      if (TYPE_CANONICAL (type))
	{
	  gcc_checking_assert (TYPE_CANONICAL (type) == type);
	  TYPE_CANONICAL (type) = canonical_type_parameter (type);
	}
    }

  dump (dumper::TREE) && dump ("Read template parm:%d %C:%N",
			       parm_tag, TREE_CODE (parm), parm);

  return parm;
}

void
trees_out::install_entity (tree decl, depset *dep)
{
  gcc_checking_assert (streaming_p ());

  /* Write the entity index, so we can insert it as soon as we
     know this is new.  */
  u (dep ? dep->cluster + 1 : 0);
  if (CHECKING_P && dep)
    {
      /* Add it to the entity map, such that we can tell it is
	 part of us.  */
      bool existed;
      unsigned *slot = &entity_map->get_or_insert
	(DECL_UID (decl), &existed);
      if (existed)
	/* If it existed, it should match.  */
	gcc_checking_assert (decl == (*entity_ary)[*slot]);
      *slot = ~dep->cluster;
    }
}

bool
trees_in::install_entity (tree decl)
{
  unsigned entity_index = u ();
  if (!entity_index)
    return false;

  if (entity_index > state->entity_num)
    {
      set_overrun ();
      return false;
    }

  /* Insert the real decl into the entity ary.  */
  unsigned ident = state->entity_lwm + entity_index - 1;
  (*entity_ary)[ident] = decl;

  /* And into the entity map, if it's not already there.  */
  tree not_tmpl = STRIP_TEMPLATE (decl);
  if (!DECL_LANG_SPECIFIC (not_tmpl)
      || !DECL_MODULE_ENTITY_P (not_tmpl))
    {
      retrofit_lang_decl (not_tmpl);
      DECL_MODULE_ENTITY_P (not_tmpl) = true;

      /* Insert into the entity hash (it cannot already be there).  */
      bool existed;
      unsigned &slot = entity_map->get_or_insert (DECL_UID (decl), &existed);
      gcc_checking_assert (!existed);
      slot = ident;
    }
  else if (state->is_partition ())
    {
      /* The decl is already in the entity map, but we see it again now from a
	 partition: we want to overwrite if the original decl wasn't also from
	 a (possibly different) partition.  Otherwise, for things like template
	 instantiations, make_dependency might not realise that this is also
	 provided from a partition and should be considered part of this module
	 (and thus always emitted into the primary interface's CMI).  */
      unsigned *slot = entity_map->get (DECL_UID (decl));
      module_state *imp = import_entity_module (*slot);
      if (!imp->is_partition ())
	*slot = ident;
    }

  return true;
}

static bool has_definition (tree decl);

/* DECL is a decl node that must be written by value.  DEP is the
   decl's depset.  */

void
trees_out::decl_value (tree decl, depset *dep)
{
  /* We should not be writing clones or template parms.  */
  gcc_checking_assert (DECL_P (decl)
		       && !DECL_CLONED_FUNCTION_P (decl)
		       && !DECL_TEMPLATE_PARM_P (decl));

  /* We should never be writing non-typedef ptrmemfuncs by value.  */
  gcc_checking_assert (TREE_CODE (decl) != TYPE_DECL
		       || DECL_ORIGINAL_TYPE (decl)
		       || !TYPE_PTRMEMFUNC_P (TREE_TYPE (decl)));

  merge_kind mk = get_merge_kind (decl, dep);

  if (CHECKING_P)
    {
      /* Never start in the middle of a template.  */
      int use_tpl = -1;
      if (tree ti = node_template_info (decl, use_tpl))
	gcc_checking_assert (TREE_CODE (TI_TEMPLATE (ti)) == OVERLOAD
			     || TREE_CODE (TI_TEMPLATE (ti)) == FIELD_DECL
			     || (DECL_TEMPLATE_RESULT (TI_TEMPLATE (ti))
				 != decl));
    }

  if (streaming_p ())
    {
      /* A new node -> tt_decl.  */
      decl_val_count++;
      i (tt_decl);
      u (mk);
      start (decl);

      if (mk != MK_unique)
	{
	  bits_out bits = stream_bits ();
	  if (!(mk & MK_template_mask) && !state->is_header ())
	    {
	      /* Tell the importer whether this is a global module entity,
		 or a module entity.  */
	      tree o = get_originating_module_decl (decl);
	      bool is_attached = false;

	      tree not_tmpl = STRIP_TEMPLATE (o);
	      if (DECL_LANG_SPECIFIC (not_tmpl)
		  && DECL_MODULE_ATTACH_P (not_tmpl))
		is_attached = true;

	      bits.b (is_attached);
	    }
	  bits.b (dep && dep->has_defn ());
	}
      tree_node_bools (decl);
    }

  int tag = insert (decl, WK_value);
  if (streaming_p ())
    dump (dumper::TREE)
      && dump ("Writing %s:%d %C:%N%S", merge_kind_name[mk], tag,
	       TREE_CODE (decl), decl, decl);

  tree inner = decl;
  int inner_tag = 0;
  if (TREE_CODE (decl) == TEMPLATE_DECL)
    {
      inner = DECL_TEMPLATE_RESULT (decl);
      inner_tag = insert (inner, WK_value);

      if (streaming_p ())
	{
	  int code = TREE_CODE (inner);
	  u (code);
	  start (inner, true);
	  tree_node_bools (inner);
	  dump (dumper::TREE)
	    && dump ("Writing %s:%d %C:%N%S", merge_kind_name[mk], inner_tag,
		     TREE_CODE (inner), inner, inner);
	}
    }

  tree type = NULL_TREE;
  int type_tag = 0;
  tree stub_decl = NULL_TREE;
  int stub_tag = 0;
  if (TREE_CODE (inner) == TYPE_DECL)
    {
      type = TREE_TYPE (inner);
      bool has_type = (type == TYPE_MAIN_VARIANT (type)
		       && TYPE_NAME (type) == inner);

      if (streaming_p ())
	u (has_type ? TREE_CODE (type) : 0);

      if (has_type)
	{
	  type_tag = insert (type, WK_value);
	  if (streaming_p ())
	    {
	      start (type, true);
	      tree_node_bools (type);
	      dump (dumper::TREE)
		&& dump ("Writing type:%d %C:%N", type_tag,
			 TREE_CODE (type), type);
	    }

	  stub_decl = TYPE_STUB_DECL (type);
	  bool has_stub = inner != stub_decl;
	  if (streaming_p ())
	    u (has_stub ? TREE_CODE (stub_decl) : 0);
	  if (has_stub)
	    {
	      stub_tag = insert (stub_decl);
	      if (streaming_p ())
		{
		  start (stub_decl, true);
		  tree_node_bools (stub_decl);
		  dump (dumper::TREE)
		    && dump ("Writing stub_decl:%d %C:%N", stub_tag,
			     TREE_CODE (stub_decl), stub_decl);
		}
	    }
	  else
	    stub_decl = NULL_TREE;
	}
      else
	/* Regular typedef.  */
	type = NULL_TREE;
    }

  /* Stream the container, we want it correctly canonicalized before
     we start emitting keys for this decl.  */
  tree container = decl_container (decl);

  unsigned tpl_levels = 0;
  if (decl != inner)
    tpl_header (decl, &tpl_levels);
  if (TREE_CODE (inner) == FUNCTION_DECL)
    fn_parms_init (inner);

  /* Now write out the merging information, and then really
     install the tag values.  */
  key_mergeable (tag, mk, decl, inner, container, dep);

  if (streaming_p ())
    dump (dumper::MERGE)
      && dump ("Wrote:%d's %s merge key %C:%N", tag,
	       merge_kind_name[mk], TREE_CODE (decl), decl);

  if (TREE_CODE (inner) == FUNCTION_DECL)
    fn_parms_fini (inner);

  if (!is_key_order ())
    tree_node_vals (decl);

  if (inner_tag)
    {
      if (!is_key_order ())
	tree_node_vals (inner);
      tpl_parms_fini (decl, tpl_levels);
    }

  if (type && !is_key_order ())
    {
      tree_node_vals (type);
      if (stub_decl)
	tree_node_vals (stub_decl);
    }

  if (!is_key_order ())
    {
      if (mk & MK_template_mask
	  || mk == MK_partial
	  || mk == MK_friend_spec)
	{
	  if (mk != MK_partial)
	    {
	      // FIXME: We should make use of the merge-key by
	      // exposing it outside of key_mergeable.  But this gets
	      // the job done.
	      auto *entry = reinterpret_cast <spec_entry *> (dep->deps[0]);

	      if (streaming_p ())
		u (get_mergeable_specialization_flags (entry->tmpl, decl));
	      tree_node (entry->tmpl);
	      tree_node (entry->args);
	    }
	  else
	    {
	      tree ti = get_template_info (inner);
	      tree_node (TI_TEMPLATE (ti));
	      tree_node (TI_ARGS (ti));
	    }
	}
      tree_node (get_constraints (decl));
    }

  if (streaming_p ())
    {
      /* Do not stray outside this section.  */
      gcc_checking_assert (!dep || dep->section == dep_hash->section);

      /* Write the entity index, so we can insert it as soon as we
	 know this is new.  */
      install_entity (decl, dep);
    }

  if (DECL_LANG_SPECIFIC (inner)
      && DECL_MODULE_KEYED_DECLS_P (inner)
      && !is_key_order ())
    {
      /* Stream the keyed entities.  */
      auto *attach_vec = keyed_table->get (inner);
      unsigned num = attach_vec->length ();
      if (streaming_p ())
	u (num);
      for (unsigned ix = 0; ix != num; ix++)
	{
	  tree attached = (*attach_vec)[ix];
	  tree_node (attached);
	  if (streaming_p ())
	    dump (dumper::MERGE)
	      && dump ("Written %d[%u] attached decl %N", tag, ix, attached);
	}
    }

  bool is_typedef = false;
  if (!type && TREE_CODE (inner) == TYPE_DECL)
    {
      tree t = TREE_TYPE (inner);
      unsigned tdef_flags = 0;
      if (DECL_ORIGINAL_TYPE (inner)
	  && TYPE_NAME (TREE_TYPE (inner)) == inner)
	{
	  tdef_flags |= 1;
	  if (TYPE_STRUCTURAL_EQUALITY_P (t)
	      && TYPE_DEPENDENT_P_VALID (t)
	      && TYPE_DEPENDENT_P (t))
	    tdef_flags |= 2;
	}
      if (streaming_p ())
	u (tdef_flags);

      if (tdef_flags & 1)
	{
	  /* A typedef type.  */
	  int type_tag = insert (t);
	  if (streaming_p ())
	    dump (dumper::TREE)
	      && dump ("Cloned:%d %s %C:%N", type_tag,
		       tdef_flags & 2 ? "depalias" : "typedef",
		       TREE_CODE (t), t);

	  is_typedef = true;
	}
    }

  if (streaming_p () && DECL_MAYBE_IN_CHARGE_CDTOR_P (decl))
    {
      bool cloned_p
	= (DECL_CHAIN (decl) && DECL_CLONED_FUNCTION_P (DECL_CHAIN (decl)));
      bool needs_vtt_parm_p
	= (cloned_p && CLASSTYPE_VBASECLASSES (DECL_CONTEXT (decl)));
      bool omit_inherited_parms_p
	= (cloned_p && DECL_MAYBE_IN_CHARGE_CONSTRUCTOR_P (decl)
	   && base_ctor_omit_inherited_parms (decl));
      unsigned flags = (int (cloned_p) << 0
			| int (needs_vtt_parm_p) << 1
			| int (omit_inherited_parms_p) << 2);
      u (flags);
      dump (dumper::TREE) && dump ("CDTOR %N is %scloned",
8045 decl, cloned_p ? "" : "not ");
8046 }
8047
8048 if (streaming_p () && VAR_P (decl) && CP_DECL_THREAD_LOCAL_P (decl))
8049 u (v: decl_tls_model (decl));
8050
8051 if (streaming_p ())
8052 dump (dumper::TREE) && dump ("Written decl:%d %C:%N", tag,
8053 TREE_CODE (decl), decl);
8054
8055 if (NAMESPACE_SCOPE_P (inner))
8056 gcc_checking_assert (!dep == (VAR_OR_FUNCTION_DECL_P (inner)
8057 && DECL_LOCAL_DECL_P (inner)));
8058 else if ((TREE_CODE (inner) == TYPE_DECL
8059 && !is_typedef
8060 && TYPE_NAME (TREE_TYPE (inner)) == inner)
8061 || TREE_CODE (inner) == FUNCTION_DECL)
8062 {
8063 bool write_defn = !dep && has_definition (decl);
8064 if (streaming_p ())
8065 u (v: write_defn);
8066 if (write_defn)
8067 write_definition (decl);
8068 }
8069}
8070
tree
trees_in::decl_value ()
{
  int tag = 0;
  bool is_attached = false;
  bool has_defn = false;
  unsigned mk_u = u ();
  if (mk_u >= MK_hwm || !merge_kind_name[mk_u])
    {
      set_overrun ();
      return NULL_TREE;
    }

  unsigned saved_unused = unused;
  unused = 0;

  merge_kind mk = merge_kind (mk_u);

  tree decl = start ();
  if (decl)
    {
      if (mk != MK_unique)
	{
	  bits_in bits = stream_bits ();
	  if (!(mk & MK_template_mask) && !state->is_header ())
	    is_attached = bits.b ();

	  has_defn = bits.b ();
	}

      if (!tree_node_bools (decl))
	decl = NULL_TREE;
    }

  /* Insert into map.  */
  tag = insert (decl);
  if (decl)
    dump (dumper::TREE)
      && dump ("Reading:%d %C", tag, TREE_CODE (decl));

  tree inner = decl;
  int inner_tag = 0;
  if (decl && TREE_CODE (decl) == TEMPLATE_DECL)
    {
      int code = u ();
      inner = start (code);
      if (inner && tree_node_bools (inner))
	DECL_TEMPLATE_RESULT (decl) = inner;
      else
	decl = NULL_TREE;

      inner_tag = insert (inner);
      if (decl)
	dump (dumper::TREE)
	  && dump ("Reading:%d %C", inner_tag, TREE_CODE (inner));
    }

  tree type = NULL_TREE;
  int type_tag = 0;
  tree stub_decl = NULL_TREE;
  int stub_tag = 0;
  if (decl && TREE_CODE (inner) == TYPE_DECL)
    {
      if (unsigned type_code = u ())
	{
	  type = start (type_code);
	  if (type && tree_node_bools (type))
	    {
	      TREE_TYPE (inner) = type;
	      TYPE_NAME (type) = inner;
	    }
	  else
	    decl = NULL_TREE;

	  type_tag = insert (type);
	  if (decl)
	    dump (dumper::TREE)
	      && dump ("Reading type:%d %C", type_tag, TREE_CODE (type));

	  if (unsigned stub_code = u ())
	    {
	      stub_decl = start (stub_code);
	      if (stub_decl && tree_node_bools (stub_decl))
		{
		  TREE_TYPE (stub_decl) = type;
		  TYPE_STUB_DECL (type) = stub_decl;
		}
	      else
		decl = NULL_TREE;

	      stub_tag = insert (stub_decl);
	      if (decl)
		dump (dumper::TREE)
		  && dump ("Reading stub_decl:%d %C", stub_tag,
			   TREE_CODE (stub_decl));
	    }
	}
    }

  if (!decl)
    {
    bail:
      if (inner_tag != 0)
	back_refs[~inner_tag] = NULL_TREE;
      if (type_tag != 0)
	back_refs[~type_tag] = NULL_TREE;
      if (stub_tag != 0)
	back_refs[~stub_tag] = NULL_TREE;
      if (tag != 0)
	back_refs[~tag] = NULL_TREE;
      set_overrun ();
      /* Bail.  */
      unused = saved_unused;
      return NULL_TREE;
    }

  /* Read the container, to ensure it's already been streamed in.  */
  tree container = decl_container ();
  unsigned tpl_levels = 0;

  /* Figure out if this decl is already known about.  */
  int parm_tag = 0;

  if (decl != inner)
    if (!tpl_header (decl, &tpl_levels))
      goto bail;
  if (TREE_CODE (inner) == FUNCTION_DECL)
    parm_tag = fn_parms_init (inner);

  tree existing = key_mergeable (tag, mk, decl, inner, type, container,
				 is_attached);
  tree existing_inner = existing;
  if (existing)
    {
      if (existing == error_mark_node)
	goto bail;

      if (TREE_CODE (STRIP_TEMPLATE (existing)) == TYPE_DECL)
	{
	  tree etype = TREE_TYPE (existing);
	  if (TYPE_LANG_SPECIFIC (etype)
	      && COMPLETE_TYPE_P (etype)
	      && !CLASSTYPE_MEMBER_VEC (etype))
	    /* Give it a member vec; we're likely to be looking
	       inside it.  */
	    set_class_bindings (etype, -1);
	}

      /* Install the existing decl into the back ref array.  */
      register_duplicate (decl, existing);
      back_refs[~tag] = existing;
      if (inner_tag != 0)
	{
	  existing_inner = DECL_TEMPLATE_RESULT (existing);
	  back_refs[~inner_tag] = existing_inner;
	}

      if (type_tag != 0)
	{
	  tree existing_type = TREE_TYPE (existing);
	  back_refs[~type_tag] = existing_type;
	  if (stub_tag != 0)
	    back_refs[~stub_tag] = TYPE_STUB_DECL (existing_type);
	}
    }

  if (parm_tag)
    fn_parms_fini (parm_tag, inner, existing_inner, has_defn);

  if (!tree_node_vals (decl))
    goto bail;

  if (inner_tag)
    {
      gcc_checking_assert (DECL_TEMPLATE_RESULT (decl) == inner);

      if (!tree_node_vals (inner))
	goto bail;

      if (!tpl_parms_fini (decl, tpl_levels))
	goto bail;
    }

  if (type && (!tree_node_vals (type)
	       || (stub_decl && !tree_node_vals (stub_decl))))
    goto bail;

  spec_entry spec;
  unsigned spec_flags = 0;
  if (mk & MK_template_mask
      || mk == MK_partial
      || mk == MK_friend_spec)
    {
      if (mk == MK_partial)
	spec_flags = 2;
      else
	spec_flags = u ();

      spec.tmpl = tree_node ();
      spec.args = tree_node ();
    }
  /* Hold constraints on the spec field, for a short while.  */
  spec.spec = tree_node ();

  dump (dumper::TREE) && dump ("Read:%d %C:%N", tag, TREE_CODE (decl), decl);

  existing = back_refs[~tag];
  bool installed = install_entity (existing);
  bool is_new = existing == decl;

  if (DECL_LANG_SPECIFIC (inner)
      && DECL_MODULE_KEYED_DECLS_P (inner))
    {
      /* Read and maybe install the attached entities.  */
      bool existed;
      auto &set = keyed_table->get_or_insert (STRIP_TEMPLATE (existing),
					      &existed);
      unsigned num = u ();
      if (is_new == existed)
	set_overrun ();
      if (is_new)
	set.reserve (num);
      for (unsigned ix = 0; !get_overrun () && ix != num; ix++)
	{
	  tree attached = tree_node ();
	  dump (dumper::MERGE)
	    && dump ("Read %d[%u] %s attached decl %N", tag, ix,
		     is_new ? "new" : "matched", attached);
	  if (is_new)
	    set.quick_push (attached);
	  else if (set[ix] != attached)
	    set_overrun ();
	}
    }

  /* Regular typedefs will have a NULL TREE_TYPE at this point.  */
  unsigned tdef_flags = 0;
  bool is_typedef = false;
  if (!type && TREE_CODE (inner) == TYPE_DECL)
    {
      tdef_flags = u ();
      if (tdef_flags & 1)
	is_typedef = true;
    }

  if (is_new)
    {
      /* A newly discovered node.  */
      if (TREE_CODE (decl) == FUNCTION_DECL && DECL_VIRTUAL_P (decl))
	/* Mark this identifier as naming a virtual function --
	   lookup_overrides relies on this optimization.  */
	IDENTIFIER_VIRTUAL_P (DECL_NAME (decl)) = true;

      if (installed)
	{
	  /* Mark the entity as imported.  */
	  retrofit_lang_decl (inner);
	  DECL_MODULE_IMPORT_P (inner) = true;
	}

      if (spec.spec)
	set_constraints (decl, spec.spec);

      if (TREE_CODE (decl) == INTEGER_CST && !TREE_OVERFLOW (decl))
	{
	  decl = cache_integer_cst (decl, true);
	  back_refs[~tag] = decl;
	}

      if (is_typedef)
	{
	  /* Frob it to be ready for cloning.  */
	  TREE_TYPE (inner) = DECL_ORIGINAL_TYPE (inner);
	  DECL_ORIGINAL_TYPE (inner) = NULL_TREE;
	  set_underlying_type (inner);
	  if (tdef_flags & 2)
	    {
	      /* Match instantiate_alias_template's handling.  */
	      tree type = TREE_TYPE (inner);
	      TYPE_DEPENDENT_P (type) = true;
	      TYPE_DEPENDENT_P_VALID (type) = true;
	      SET_TYPE_STRUCTURAL_EQUALITY (type);
	    }
	}

      if (inner_tag)
	/* Set the TEMPLATE_DECL's type.  */
	TREE_TYPE (decl) = TREE_TYPE (inner);

      /* Redetermine whether we need to import or export this declaration
	 for this TU.  But for extern templates we know we must import:
	 they'll be defined in a different TU.
	 FIXME: How do dllexport and dllimport interact across a module?
	 See also https://github.com/itanium-cxx-abi/cxx-abi/issues/170.
	 May have to revisit?  */
      if (type
	  && CLASS_TYPE_P (type)
	  && TYPE_LANG_SPECIFIC (type)
	  && !(CLASSTYPE_EXPLICIT_INSTANTIATION (type)
	       && CLASSTYPE_INTERFACE_KNOWN (type)
	       && CLASSTYPE_INTERFACE_ONLY (type)))
	{
	  CLASSTYPE_INTERFACE_ONLY (type) = false;
	  CLASSTYPE_INTERFACE_UNKNOWN (type) = true;
	}

      /* Add to specialization tables now that constraints etc are
	 added.  */
      if (mk == MK_partial)
	{
	  bool is_type = TREE_CODE (inner) == TYPE_DECL;
	  spec.spec = is_type ? type : inner;
	  add_mergeable_specialization (!is_type, &spec, decl, spec_flags);
	}
      else if (mk & MK_template_mask)
	{
	  bool is_type = !(mk & MK_tmpl_decl_mask);
	  spec.spec = is_type ? type : mk & MK_tmpl_tmpl_mask ? inner : decl;
	  add_mergeable_specialization (!is_type, &spec, decl, spec_flags);
	}

      if (NAMESPACE_SCOPE_P (decl)
	  && (mk == MK_named || mk == MK_unique
	      || mk == MK_enum || mk == MK_friend_spec)
	  && !(VAR_OR_FUNCTION_DECL_P (decl) && DECL_LOCAL_DECL_P (decl)))
	add_module_namespace_decl (CP_DECL_CONTEXT (decl), decl);

      if (DECL_ARTIFICIAL (decl)
	  && TREE_CODE (decl) == FUNCTION_DECL
	  && !DECL_TEMPLATE_INFO (decl)
	  && DECL_CONTEXT (decl) && TYPE_P (DECL_CONTEXT (decl))
	  && TYPE_SIZE (DECL_CONTEXT (decl))
	  && !DECL_THUNK_P (decl))
	/* A new implicit member function, when the class is
	   complete.  This means the importee declared it, and
	   we must now add it to the class.  Note that implicit
	   member fns of template instantiations do not themselves
	   look like templates.  */
	if (!install_implicit_member (inner))
	  set_overrun ();

      /* When importing a TLS wrapper from a header unit, we haven't
	 actually emitted its definition yet.  Remember it so we can
	 do this later.  */
      if (state->is_header ()
	  && decl_tls_wrapper_p (decl))
	note_vague_linkage_fn (decl);

      /* Set up aliases for the declaration.  */
      if (tree alias = lookup_attribute ("alias", DECL_ATTRIBUTES (decl)))
	{
	  alias = TREE_VALUE (TREE_VALUE (alias));
	  alias = get_identifier (TREE_STRING_POINTER (alias));
	  assemble_alias (decl, alias);
	}
    }
  else
    {
      /* DECL is the to-be-discarded decl.  Its internal pointers will
	 be to the EXISTING's structure.  Frob it to point to its
	 own other structures, so loading its definition will alter
	 it, and not the existing decl.  */
      dump (dumper::MERGE) && dump ("Deduping %N", existing);

      if (inner_tag)
	DECL_TEMPLATE_RESULT (decl) = inner;

      if (type)
	{
	  /* Point at the to-be-discarded type & decl.  */
	  TYPE_NAME (type) = inner;
	  TREE_TYPE (inner) = type;

	  TYPE_STUB_DECL (type) = stub_decl ? stub_decl : inner;
	  if (stub_decl)
	    TREE_TYPE (stub_decl) = type;
	}

      if (inner_tag)
	/* Set the TEMPLATE_DECL's type.  */
	TREE_TYPE (decl) = TREE_TYPE (inner);

      if (!is_matching_decl (existing, decl, is_typedef))
	unmatched_duplicate (existing);

      if (TREE_CODE (inner) == FUNCTION_DECL)
	{
	  tree e_inner = STRIP_TEMPLATE (existing);
	  for (auto parm = DECL_ARGUMENTS (inner);
	       parm; parm = DECL_CHAIN (parm))
	    DECL_CONTEXT (parm) = e_inner;
	}

      /* And our result is the existing node.  */
      decl = existing;
    }

  if (mk == MK_friend_spec)
    {
      tree e = match_mergeable_specialization (true, &spec);
      if (!e)
	{
	  spec.spec = inner;
	  add_mergeable_specialization (true, &spec, decl, spec_flags);
	}
      else if (e != existing)
	set_overrun ();
    }

  if (is_typedef)
    {
      /* Insert the type into the array now.  */
      tag = insert (TREE_TYPE (decl));
      dump (dumper::TREE)
	&& dump ("Cloned:%d typedef %C:%N",
		 tag, TREE_CODE (TREE_TYPE (decl)), TREE_TYPE (decl));
    }

  unused = saved_unused;

  if (DECL_MAYBE_IN_CHARGE_CDTOR_P (decl))
    {
      unsigned flags = u ();

      if (is_new)
	{
	  bool cloned_p = flags & 1;
	  dump (dumper::TREE) && dump ("CDTOR %N is %scloned",
				       decl, cloned_p ? "" : "not ");
	  if (cloned_p)
	    build_cdtor_clones (decl, flags & 2, flags & 4,
				/* Update the member vec, if there is
				   one (we're in a different cluster
				   to the class defn).  */
				CLASSTYPE_MEMBER_VEC (DECL_CONTEXT (decl)));
	}
    }

  if (VAR_P (decl) && CP_DECL_THREAD_LOCAL_P (decl))
    {
      enum tls_model model = tls_model (u ());
      if (is_new)
	set_decl_tls_model (decl, model);
    }

  if (!NAMESPACE_SCOPE_P (inner)
      && ((TREE_CODE (inner) == TYPE_DECL
	   && !is_typedef
	   && TYPE_NAME (TREE_TYPE (inner)) == inner)
	  || TREE_CODE (inner) == FUNCTION_DECL)
      && u ())
    read_definition (decl);

  return decl;
}

/* DECL is an unnameable member of CTX.  Return a suitable identifying
   index.  */

static unsigned
get_field_ident (tree ctx, tree decl)
{
  gcc_checking_assert (TREE_CODE (decl) == USING_DECL
		       || !DECL_NAME (decl)
		       || IDENTIFIER_ANON_P (DECL_NAME (decl)));

  unsigned ix = 0;
  for (tree fields = TYPE_FIELDS (ctx);
       fields; fields = DECL_CHAIN (fields))
    {
      if (fields == decl)
	return ix;

      if (DECL_CONTEXT (fields) == ctx
	  && (TREE_CODE (fields) == USING_DECL
	      || (TREE_CODE (fields) == FIELD_DECL
		  && (!DECL_NAME (fields)
		      || IDENTIFIER_ANON_P (DECL_NAME (fields))))))
	/* Count this field.  */
	ix++;
    }
  gcc_unreachable ();
}

static tree
lookup_field_ident (tree ctx, unsigned ix)
{
  for (tree fields = TYPE_FIELDS (ctx);
       fields; fields = DECL_CHAIN (fields))
    if (DECL_CONTEXT (fields) == ctx
	&& (TREE_CODE (fields) == USING_DECL
	    || (TREE_CODE (fields) == FIELD_DECL
		&& (!DECL_NAME (fields)
		    || IDENTIFIER_ANON_P (DECL_NAME (fields))))))
      if (!ix--)
	return fields;

  return NULL_TREE;
}

/* Reference DECL.  REF indicates the walk kind we are performing.
   Return true if we should write this decl by value.  */

bool
trees_out::decl_node (tree decl, walk_kind ref)
{
  gcc_checking_assert (DECL_P (decl) && !DECL_TEMPLATE_PARM_P (decl)
		       && DECL_CONTEXT (decl));

  if (ref == WK_value)
    {
      depset *dep = dep_hash->find_dependency (decl);
      decl_value (decl, dep);
      return false;
    }

  switch (TREE_CODE (decl))
    {
    default:
      break;

    case FUNCTION_DECL:
      gcc_checking_assert (!DECL_LOCAL_DECL_P (decl));
      break;

    case RESULT_DECL:
      /* Unlike PARM_DECLs, RESULT_DECLs are only generated and
	 referenced when we're inside the function itself.  */
      return true;

    case PARM_DECL:
      {
	if (streaming_p ())
	  i (tt_parm);
	tree_node (DECL_CONTEXT (decl));
	if (streaming_p ())
	  {
	    /* That must have put this in the map.  */
	    walk_kind ref = ref_node (decl);
	    if (ref != WK_none)
	      // FIXME:OPTIMIZATION We can wander into bits of the
	      // template this was instantiated from.  For instance
	      // deferred noexcept and default parms.  Currently we'll
	      // end up cloning those bits of tree.  It would be nice
	      // to reference those specific nodes.  I think we should
	      // put those things in the map when we reference their
	      // template by name.  See the note in add_indirects.
	      return true;

	    dump (dumper::TREE)
	      && dump ("Wrote %s reference %N",
		       TREE_CODE (decl) == PARM_DECL ? "parameter" : "result",
		       decl);
	  }
      }
      return false;

    case IMPORTED_DECL:
      /* This describes a USING_DECL to the middle end's debug
	 machinery.  It originates from the Fortran FE, and has
	 nothing to do with C++ modules.  */
      return true;

    case LABEL_DECL:
      return true;

    case CONST_DECL:
      {
	/* If I end up cloning enum decls, implementing C++20 using
	   E::v, this will need tweaking.  */
	if (streaming_p ())
	  i (tt_enum_decl);
	tree ctx = DECL_CONTEXT (decl);
	gcc_checking_assert (TREE_CODE (ctx) == ENUMERAL_TYPE);
	tree_node (ctx);
	tree_node (DECL_NAME (decl));

	int tag = insert (decl);
	if (streaming_p ())
	  dump (dumper::TREE)
	    && dump ("Wrote enum decl:%d %C:%N", tag, TREE_CODE (decl), decl);
	return false;
      }
      break;

    case USING_DECL:
      if (TREE_CODE (DECL_CONTEXT (decl)) == FUNCTION_DECL)
	break;
      /* FALLTHROUGH */

    case FIELD_DECL:
      {
	if (streaming_p ())
	  i (tt_data_member);

	tree ctx = DECL_CONTEXT (decl);
	tree_node (ctx);

	tree name = NULL_TREE;

	if (TREE_CODE (decl) == USING_DECL)
	  ;
	else
	  {
	    name = DECL_NAME (decl);
	    if (name && IDENTIFIER_ANON_P (name))
	      name = NULL_TREE;
	  }

	tree_node (name);
	if (!name && streaming_p ())
	  {
	    unsigned ix = get_field_ident (ctx, decl);
	    u (ix);
	  }

	int tag = insert (decl);
	if (streaming_p ())
	  dump (dumper::TREE)
	    && dump ("Wrote member:%d %C:%N", tag, TREE_CODE (decl), decl);
	return false;
      }
      break;

    case VAR_DECL:
      gcc_checking_assert (!DECL_LOCAL_DECL_P (decl));
      if (DECL_VTABLE_OR_VTT_P (decl))
	{
	  /* VTT or VTABLE, they are all on the vtables list.  */
	  tree ctx = CP_DECL_CONTEXT (decl);
	  tree vtable = CLASSTYPE_VTABLES (ctx);
	  for (unsigned ix = 0; ; vtable = DECL_CHAIN (vtable), ix++)
	    if (vtable == decl)
	      {
		gcc_checking_assert (DECL_VIRTUAL_P (decl));
		if (streaming_p ())
		  {
		    u (tt_vtable);
		    u (ix);
		    dump (dumper::TREE)
		      && dump ("Writing vtable %N[%u]", ctx, ix);
		  }
		tree_node (ctx);
		return false;
	      }
	  gcc_unreachable ();
	}

      if (DECL_TINFO_P (decl))
	{
	tinfo:
	  /* A typeinfo, tt_tinfo_typedef or tt_tinfo_var.  */
	  bool is_var = VAR_P (decl);
	  tree type = TREE_TYPE (decl);
	  unsigned ix = get_pseudo_tinfo_index (type);
	  if (streaming_p ())
	    {
	      i (is_var ? tt_tinfo_var : tt_tinfo_typedef);
	      u (ix);
	    }

	  if (is_var)
	    {
	      /* We also need the type it is for and mangled name, so
		 the reader doesn't need to complete the type (which
		 would break section ordering).  The type it is for is
		 stashed on the name's TREE_TYPE.  */
	      tree name = DECL_NAME (decl);
	      tree_node (name);
	      type = TREE_TYPE (name);
	      tree_node (type);
	    }

	  int tag = insert (decl);
	  if (streaming_p ())
	    dump (dumper::TREE)
	      && dump ("Wrote tinfo_%s:%d %u %N", is_var ? "var" : "type",
		       tag, ix, type);

	  if (!is_var)
	    {
	      tag = insert (type);
	      if (streaming_p ())
		dump (dumper::TREE)
		  && dump ("Wrote tinfo_type:%d %u %N", tag, ix, type);
	    }
	  return false;
	}

      if (DECL_NTTP_OBJECT_P (decl))
	{
	  /* An NTTP parm object.  */
	  if (streaming_p ())
	    i (tt_nttp_var);
	  tree_node (tparm_object_argument (decl));
	  tree_node (DECL_NAME (decl));
	  int tag = insert (decl);
	  if (streaming_p ())
	    dump (dumper::TREE)
	      && dump ("Wrote nttp object:%d %N", tag, DECL_NAME (decl));
	  return false;
	}

      break;

    case TYPE_DECL:
      if (DECL_TINFO_P (decl))
	goto tinfo;
      break;
    }

  if (DECL_THUNK_P (decl))
    {
      /* Thunks are similar to binfos -- write the thunked-to decl and
	 then thunk-specific key info.  */
      if (streaming_p ())
	{
	  i (tt_thunk);
	  i (THUNK_FIXED_OFFSET (decl));
	}

      tree target = decl;
      while (DECL_THUNK_P (target))
	target = THUNK_TARGET (target);
      tree_node (target);
      tree_node (THUNK_VIRTUAL_OFFSET (decl));
      int tag = insert (decl);
      if (streaming_p ())
	dump (dumper::TREE)
	  && dump ("Wrote:%d thunk %N to %N", tag, DECL_NAME (decl), target);
      return false;
    }

  if (DECL_CLONED_FUNCTION_P (decl))
    {
      tree target = get_clone_target (decl);
      if (streaming_p ())
	i (tt_clone_ref);

      tree_node (target);
      tree_node (DECL_NAME (decl));
      if (DECL_VIRTUAL_P (decl))
	tree_node (DECL_VINDEX (decl));
      int tag = insert (decl);
      if (streaming_p ())
	dump (dumper::TREE)
	  && dump ("Wrote:%d clone %N of %N", tag, DECL_NAME (decl), target);
      return false;
    }

  /* Everything left should be a thing that is in the entity table.
     Mostly things that can be defined outside of their (original
     declaration) context.  */
  gcc_checking_assert (TREE_CODE (decl) == TEMPLATE_DECL
		       || VAR_P (decl)
		       || TREE_CODE (decl) == FUNCTION_DECL
		       || TREE_CODE (decl) == TYPE_DECL
		       || TREE_CODE (decl) == USING_DECL
		       || TREE_CODE (decl) == CONCEPT_DECL
		       || TREE_CODE (decl) == NAMESPACE_DECL);

  int use_tpl = -1;
  tree ti = node_template_info (decl, use_tpl);
  tree tpl = NULL_TREE;

  /* If this is the DECL_TEMPLATE_RESULT of a TEMPLATE_DECL, get the
     TEMPLATE_DECL.  Note TI_TEMPLATE is not a TEMPLATE_DECL for
     (some) friends, so we need to check that.  */
  // FIXME: Should local friend template specializations be by value?
  // They don't get idents so we'll never know they're imported, but I
  // think we can only reach them from the TU that defines the
  // befriending class?
  if (ti && TREE_CODE (TI_TEMPLATE (ti)) == TEMPLATE_DECL
      && DECL_TEMPLATE_RESULT (TI_TEMPLATE (ti)) == decl)
    {
      tpl = TI_TEMPLATE (ti);
    partial_template:
      if (streaming_p ())
	{
	  i (tt_template);
	  dump (dumper::TREE)
	    && dump ("Writing implicit template %C:%N%S",
		     TREE_CODE (tpl), tpl, tpl);
	}
      tree_node (tpl);

      /* Streaming TPL caused us to visit DECL and maybe its type.  */
      gcc_checking_assert (TREE_VISITED (decl));
      if (DECL_IMPLICIT_TYPEDEF_P (decl))
	gcc_checking_assert (TREE_VISITED (TREE_TYPE (decl)));
      return false;
    }

  tree ctx = CP_DECL_CONTEXT (decl);
  depset *dep = NULL;
  if (streaming_p ())
    dep = dep_hash->find_dependency (decl);
  else if (TREE_CODE (ctx) != FUNCTION_DECL
	   || TREE_CODE (decl) == TEMPLATE_DECL
	   || DECL_IMPLICIT_TYPEDEF_P (decl)
	   || (DECL_LANG_SPECIFIC (decl)
	       && DECL_MODULE_IMPORT_P (decl)))
    {
      auto kind = (TREE_CODE (decl) == NAMESPACE_DECL
		   && !DECL_NAMESPACE_ALIAS (decl)
		   ? depset::EK_NAMESPACE : depset::EK_DECL);
      dep = dep_hash->add_dependency (decl, kind);
    }

  if (!dep)
    {
      /* Some internal entity of context.  Do by value.  */
      decl_value (decl, NULL);
      return false;
    }

  if (dep->get_entity_kind () == depset::EK_REDIRECT)
    {
      /* The DECL_TEMPLATE_RESULT of a partial specialization.
	 Write the partial specialization's template.  */
      depset *redirect = dep->deps[0];
      gcc_checking_assert (redirect->get_entity_kind () == depset::EK_PARTIAL);
      tpl = redirect->get_entity ();
      goto partial_template;
    }

  if (streaming_p ())
    {
      /* Locate the entity.  */
      unsigned index = dep->cluster;
      unsigned import = 0;

      if (dep->is_import ())
	import = dep->section;
      else if (CHECKING_P)
	/* It should be what we put there.  */
	gcc_checking_assert (index == ~import_entity_index (decl));

#if CHECKING_P
      gcc_assert (!import || importedness >= 0);
#endif
      i (tt_entity);
      u (import);
      u (index);
    }

  int tag = insert (decl);
  if (streaming_p () && dump (dumper::TREE))
    {
      char const *kind = "import";
      module_state *from = (*modules)[0];
      if (dep->is_import ())
	/* Rediscover the unremapped index.  */
	from = import_entity_module (import_entity_index (decl));
      else
	{
	  tree o = get_originating_module_decl (decl);
	  o = STRIP_TEMPLATE (o);
	  kind = (DECL_LANG_SPECIFIC (o) && DECL_MODULE_PURVIEW_P (o)
		  ? "purview" : "GMF");
	}
      dump ("Wrote %s:%d %C:%N@%M", kind,
	    tag, TREE_CODE (decl), decl, from);
    }

  add_indirects (decl);

  return false;
}

void
trees_out::type_node (tree type)
{
  gcc_assert (TYPE_P (type));

  tree root = (TYPE_NAME (type)
	       ? TREE_TYPE (TYPE_NAME (type)) : TYPE_MAIN_VARIANT (type));

  if (type != root)
    {
      if (streaming_p ())
	i (tt_variant_type);
      tree_node (root);

      int flags = -1;

      if (TREE_CODE (type) == FUNCTION_TYPE
	  || TREE_CODE (type) == METHOD_TYPE)
	{
	  int quals = type_memfn_quals (type);
	  int rquals = type_memfn_rqual (type);
	  tree raises = TYPE_RAISES_EXCEPTIONS (type);
	  bool late = TYPE_HAS_LATE_RETURN_TYPE (type);

	  if (raises != TYPE_RAISES_EXCEPTIONS (root)
	      || rquals != type_memfn_rqual (root)
	      || quals != type_memfn_quals (root)
	      || late != TYPE_HAS_LATE_RETURN_TYPE (root))
	    flags = rquals | (int (late) << 2) | (quals << 3);
	}
      else
	{
	  if (TYPE_USER_ALIGN (type))
	    flags = TYPE_ALIGN_RAW (type);
	}

      if (streaming_p ())
	i (flags);

      if (flags < 0)
	;
      else if (TREE_CODE (type) == FUNCTION_TYPE
	       || TREE_CODE (type) == METHOD_TYPE)
	{
	  tree raises = TYPE_RAISES_EXCEPTIONS (type);
	  if (raises == TYPE_RAISES_EXCEPTIONS (root))
	    raises = error_mark_node;
	  tree_node (raises);
	}

      tree_node (TYPE_ATTRIBUTES (type));

      if (streaming_p ())
	{
	  /* Qualifiers.  */
	  int rquals = cp_type_quals (root);
	  int quals = cp_type_quals (type);
	  if (quals == rquals)
	    quals = -1;
	  i (quals);
	}

      if (ref_node (type) != WK_none)
	{
	  int tag = insert (type);
	  if (streaming_p ())
	    {
	      i (0);
	      dump (dumper::TREE)
		&& dump ("Wrote:%d variant type %C", tag, TREE_CODE (type));
	    }
	}
      return;
    }

  if (tree name = TYPE_NAME (type))
    if ((TREE_CODE (name) == TYPE_DECL && DECL_ORIGINAL_TYPE (name))
	|| DECL_TEMPLATE_PARM_P (name)
	|| TREE_CODE (type) == RECORD_TYPE
	|| TREE_CODE (type) == UNION_TYPE
	|| TREE_CODE (type) == ENUMERAL_TYPE)
      {
	/* We can meet template parms that we didn't meet in the
	   tpl_parms walk, because we're referring to a derived type
	   that was previously constructed from equivalent template
	   parms.  */
	if (streaming_p ())
	  {
	    i (tt_typedef_type);
	    dump (dumper::TREE)
	      && dump ("Writing %stypedef %C:%N",
		       DECL_IMPLICIT_TYPEDEF_P (name) ? "implicit " : "",
		       TREE_CODE (name), name);
	  }
	tree_node (name);
	if (streaming_p ())
	  dump (dumper::TREE) && dump ("Wrote typedef %C:%N%S",
				       TREE_CODE (name), name, name);
	gcc_checking_assert (TREE_VISITED (type));
	return;
      }

  if (TYPE_PTRMEMFUNC_P (type))
    {
      /* This is a distinct type node, masquerading as a structure.  */
      tree fn_type = TYPE_PTRMEMFUNC_FN_TYPE (type);
      if (streaming_p ())
	i (tt_ptrmem_type);
      tree_node (fn_type);
      int tag = insert (type);
      if (streaming_p ())
	dump (dumper::TREE) && dump ("Written:%d ptrmem type", tag);
      return;
    }

  if (streaming_p ())
    {
      u (tt_derived_type);
      u (TREE_CODE (type));
    }

  tree_node (TREE_TYPE (type));
  switch (TREE_CODE (type))
    {
    default:
      /* We should never meet a type here that is indescribable in
	 terms of other types.  */
      gcc_unreachable ();

    case ARRAY_TYPE:
      tree_node (TYPE_DOMAIN (type));
      if (streaming_p ())
	/* Dependent arrays are constructed with TYPE_DEPENDENT_P
	   already set.  */
	u (TYPE_DEPENDENT_P (type));
      break;

    case COMPLEX_TYPE:
      /* No additional data.  */
      break;

    case BOOLEAN_TYPE:
      /* A non-standard boolean type.  */
      if (streaming_p ())
	u (TYPE_PRECISION (type));
      break;

    case INTEGER_TYPE:
      if (TREE_TYPE (type))
	{
	  /* A range type (representing an array domain).  */
	  tree_node (TYPE_MIN_VALUE (type));
	  tree_node (TYPE_MAX_VALUE (type));
	}
      else
	{
	  /* A new integral type (representing a bitfield).  */
	  if (streaming_p ())
	    {
	      unsigned prec = TYPE_PRECISION (type);
	      bool unsigned_p = TYPE_UNSIGNED (type);

	      u ((prec << 1) | unsigned_p);
	    }
	}
      break;

    case METHOD_TYPE:
    case FUNCTION_TYPE:
      {
	gcc_checking_assert (type_memfn_rqual (type) == REF_QUAL_NONE);

	tree arg_types = TYPE_ARG_TYPES (type);
	if (TREE_CODE (type) == METHOD_TYPE)
	  {
	    tree_node (TREE_TYPE (TREE_VALUE (arg_types)));
	    arg_types = TREE_CHAIN (arg_types);
	  }
	tree_node (arg_types);
      }
      break;

    case OFFSET_TYPE:
      tree_node (TYPE_OFFSET_BASETYPE (type));
      break;

    case POINTER_TYPE:
      /* No additional data.  */
      break;

    case REFERENCE_TYPE:
      if (streaming_p ())
	u (TYPE_REF_IS_RVALUE (type));
      break;

    case DECLTYPE_TYPE:
    case TYPEOF_TYPE:
    case DEPENDENT_OPERATOR_TYPE:
      tree_node (TYPE_VALUES_RAW (type));
      if (TREE_CODE (type) == DECLTYPE_TYPE)
	/* We stash a whole bunch of things into decltype's
	   flags.  */
	if (streaming_p ())
	  tree_node_bools (type);
      break;

    case TRAIT_TYPE:
      tree_node (TRAIT_TYPE_KIND_RAW (type));
      tree_node (TRAIT_TYPE_TYPE1 (type));
      tree_node (TRAIT_TYPE_TYPE2 (type));
      break;

    case TYPE_ARGUMENT_PACK:
      /* No additional data.  */
      break;

    case TYPE_PACK_EXPANSION:
      if (streaming_p ())
	u (PACK_EXPANSION_LOCAL_P (type));
      tree_node (PACK_EXPANSION_PARAMETER_PACKS (type));
      tree_node (PACK_EXPANSION_EXTRA_ARGS (type));
      break;

    case TYPENAME_TYPE:
      {
	tree_node (TYPE_CONTEXT (type));
	tree_node (DECL_NAME (TYPE_NAME (type)));
	tree_node (TYPENAME_TYPE_FULLNAME (type));
	if (streaming_p ())
	  {
	    enum tag_types tag_type = none_type;
	    if (TYPENAME_IS_ENUM_P (type))
	      tag_type = enum_type;
	    else if (TYPENAME_IS_CLASS_P (type))
	      tag_type = class_type;
	    u (int (tag_type));
	  }
      }
      break;

    case UNBOUND_CLASS_TEMPLATE:
      {
	tree decl = TYPE_NAME (type);
	tree_node (DECL_CONTEXT (decl));
	tree_node (DECL_NAME (decl));
	tree_node (DECL_TEMPLATE_PARMS (decl));
      }
      break;

    case VECTOR_TYPE:
      if (streaming_p ())
	{
	  poly_uint64 nunits = TYPE_VECTOR_SUBPARTS (type);
	  for (unsigned ix = 0; ix != NUM_POLY_INT_COEFFS; ix++)
	    wu (nunits.coeffs[ix]);
	}
      break;
    }

  /* We may have met the type during emitting the above.  */
  if (ref_node (type) != WK_none)
    {
      int tag = insert (type);
      if (streaming_p ())
9205 {
9206 i (v: 0);
9207 dump (dumper::TREE)
9208 && dump ("Wrote:%d derived type %C", tag, TREE_CODE (type));
9209 }
9210 }
9211
9212 return;
9213}

/* T is (mostly*) a non-mergeable node that must be written by value.
   The mergeable exception is BINFOs, which are as-if DECLs. */

void
trees_out::tree_value (tree t)
{
  /* We should never be writing a type by value.  tree_type should
     have streamed it, or we're going via its TYPE_DECL. */
  gcc_checking_assert (!TYPE_P (t));

  if (DECL_P (t))
    /* No template, type, var or function, except anonymous
       non-context vars. */
    gcc_checking_assert ((TREE_CODE (t) != TEMPLATE_DECL
                          && TREE_CODE (t) != TYPE_DECL
                          && (TREE_CODE (t) != VAR_DECL
                              || (!DECL_NAME (t) && !DECL_CONTEXT (t)))
                          && TREE_CODE (t) != FUNCTION_DECL));

  if (streaming_p ())
    {
      /* A new node -> tt_node. */
      tree_val_count++;
      i (tt_node);
      start (t);
      tree_node_bools (t);
    }

  if (TREE_CODE (t) == TREE_BINFO)
    /* Binfos are decl-like and need merging information. */
    binfo_mergeable (t);

  int tag = insert (t, WK_value);
  if (streaming_p ())
    dump (dumper::TREE)
      && dump ("Writing tree:%d %C:%N", tag, TREE_CODE (t), t);

  tree_node_vals (t);

  if (streaming_p ())
    dump (dumper::TREE) && dump ("Written tree:%d %C:%N", tag, TREE_CODE (t), t);
}

tree
trees_in::tree_value ()
{
  tree t = start ();
  if (!t || !tree_node_bools (t))
    return NULL_TREE;

  tree existing = t;
  if (TREE_CODE (t) == TREE_BINFO)
    {
      tree type;
      unsigned ix = binfo_mergeable (&type);
      if (TYPE_BINFO (type))
        {
          /* We already have a definition, this must be a duplicate. */
          dump (dumper::MERGE)
            && dump ("Deduping binfo %N[%u]", type, ix);
          existing = TYPE_BINFO (type);
          while (existing && ix--)
            existing = TREE_CHAIN (existing);
          if (existing)
            register_duplicate (t, existing);
          else
            /* Error, mismatch -- diagnose in read_class_def's
               checking. */
            existing = t;
        }
    }

  /* Insert into map. */
  int tag = insert (existing);
  dump (dumper::TREE)
    && dump ("Reading tree:%d %C", tag, TREE_CODE (t));

  if (!tree_node_vals (t))
    {
      back_refs[~tag] = NULL_TREE;
      set_overrun ();
      /* Bail. */
      return NULL_TREE;
    }

  if (TREE_CODE (t) == LAMBDA_EXPR
      && CLASSTYPE_LAMBDA_EXPR (TREE_TYPE (t)))
    {
      existing = CLASSTYPE_LAMBDA_EXPR (TREE_TYPE (t));
      back_refs[~tag] = existing;
    }

  dump (dumper::TREE) && dump ("Read tree:%d %C:%N", tag, TREE_CODE (t), t);

  if (TREE_CODE (existing) == INTEGER_CST && !TREE_OVERFLOW (existing))
    {
      existing = cache_integer_cst (t, true);
      back_refs[~tag] = existing;
    }

  return existing;
}

/* Stream out tree node T.  We automatically create local back
   references, which is essentially a single-pass lisp
   self-referential structure pretty-printer. */

void
trees_out::tree_node (tree t)
{
  dump.indent ();
  walk_kind ref = ref_node (t);
  if (ref == WK_none)
    goto done;

  if (ref != WK_normal)
    goto skip_normal;

  if (TREE_CODE (t) == IDENTIFIER_NODE)
    {
      /* An identifier node -> tt_id, tt_conv_id, tt_anon_id, tt_lambda_id. */
      int code = tt_id;
      if (IDENTIFIER_ANON_P (t))
        code = IDENTIFIER_LAMBDA_P (t) ? tt_lambda_id : tt_anon_id;
      else if (IDENTIFIER_CONV_OP_P (t))
        code = tt_conv_id;

      if (streaming_p ())
        i (code);

      if (code == tt_conv_id)
        {
          tree type = TREE_TYPE (t);
          gcc_checking_assert (type || t == conv_op_identifier);
          tree_node (type);
        }
      else if (code == tt_id && streaming_p ())
        str (IDENTIFIER_POINTER (t), IDENTIFIER_LENGTH (t));

      int tag = insert (t);
      if (streaming_p ())
        {
          /* We know the ordering of the 4 id tags. */
          static const char *const kinds[] =
            {"", "conv_op ", "anon ", "lambda "};
          dump (dumper::TREE)
            && dump ("Written:%d %sidentifier:%N", tag,
                     kinds[code - tt_id],
                     code == tt_conv_id ? TREE_TYPE (t) : t);
        }
      goto done;
    }

  if (TREE_CODE (t) == TREE_BINFO)
    {
      /* A BINFO -> tt_binfo.
         We must do this by reference.  We stream the binfo tree
         itself when streaming its owning RECORD_TYPE.  That we got
         here means the dominating type is not in this SCC. */
      if (streaming_p ())
        i (tt_binfo);
      binfo_mergeable (t);
      gcc_checking_assert (!TREE_VISITED (t));
      int tag = insert (t);
      if (streaming_p ())
        dump (dumper::TREE) && dump ("Inserting binfo:%d %N", tag, t);
      goto done;
    }

  if (TREE_CODE (t) == INTEGER_CST
      && !TREE_OVERFLOW (t)
      && TREE_CODE (TREE_TYPE (t)) == ENUMERAL_TYPE)
    {
      /* An integral constant of enumeral type.  See if it matches one
         of the enumeration values. */
      for (tree values = TYPE_VALUES (TREE_TYPE (t));
           values; values = TREE_CHAIN (values))
        {
          tree decl = TREE_VALUE (values);
          if (tree_int_cst_equal (DECL_INITIAL (decl), t))
            {
              if (streaming_p ())
                u (tt_enum_value);
              tree_node (decl);
              dump (dumper::TREE) && dump ("Written enum value %N", decl);
              goto done;
            }
        }
      /* It didn't match.  We'll write it as an explicit INTEGER_CST
         node. */
    }

  if (TYPE_P (t))
    {
      type_node (t);
      goto done;
    }

  if (DECL_P (t))
    {
      if (DECL_TEMPLATE_PARM_P (t))
        {
          tpl_parm_value (t);
          goto done;
        }

      if (!DECL_CONTEXT (t))
        {
          /* There are a few cases of decls with no context.  We'll write
             these by value, but first assert they are cases we expect. */
          gcc_checking_assert (ref == WK_normal);
          switch (TREE_CODE (t))
            {
            default: gcc_unreachable ();

            case LABEL_DECL:
              /* CASE_LABEL_EXPRs contain uncontexted LABEL_DECLs. */
              gcc_checking_assert (!DECL_NAME (t));
              break;

            case VAR_DECL:
              /* AGGR_INIT_EXPRs cons up anonymous uncontexted VAR_DECLs. */
              gcc_checking_assert (!DECL_NAME (t)
                                   && DECL_ARTIFICIAL (t));
              break;

            case PARM_DECL:
              /* REQUIRES_EXPRs have a tree list of uncontexted
                 PARM_DECLS.  It'd be nice if they had a
                 distinguishing flag to double check. */
              break;
            }
          goto by_value;
        }
    }

 skip_normal:
  if (DECL_P (t) && !decl_node (t, ref))
    goto done;

  /* Otherwise by value. */
 by_value:
  tree_value (t);

 done:
  /* And, breathe out. */
  dump.outdent ();
}

/* Stream in a tree node. */

tree
trees_in::tree_node (bool is_use)
{
  if (get_overrun ())
    return NULL_TREE;

  dump.indent ();
  int tag = i ();
  tree res = NULL_TREE;
  switch (tag)
    {
    default:
      /* backref, pull it out of the map. */
      res = back_ref (tag);
      break;

    case tt_null:
      /* NULL_TREE. */
      break;

    case tt_fixed:
      /* A fixed ref, find it in the fixed_ref array. */
      {
        unsigned fix = u ();
        if (fix < (*fixed_trees).length ())
          {
            res = (*fixed_trees)[fix];
            dump (dumper::TREE) && dump ("Read fixed:%u %C:%N%S", fix,
                                         TREE_CODE (res), res, res);
          }

        if (!res)
          set_overrun ();
      }
      break;

    case tt_parm:
      {
        tree fn = tree_node ();
        if (fn && TREE_CODE (fn) == FUNCTION_DECL)
          res = tree_node ();
        if (res)
          dump (dumper::TREE)
            && dump ("Read %s reference %N",
                     TREE_CODE (res) == PARM_DECL ? "parameter" : "result",
                     res);
      }
      break;

    case tt_node:
      /* A new node.  Stream it in. */
      res = tree_value ();
      break;

    case tt_decl:
      /* A new decl.  Stream it in. */
      res = decl_value ();
      break;

    case tt_tpl_parm:
      /* A template parameter.  Stream it in. */
      res = tpl_parm_value ();
      break;

    case tt_id:
      /* An identifier node. */
      {
        size_t l;
        const char *chars = str (&l);
        res = get_identifier_with_length (chars, l);
        int tag = insert (res);
        dump (dumper::TREE)
          && dump ("Read identifier:%d %N", tag, res);
      }
      break;

    case tt_conv_id:
      /* A conversion operator.  Get the type and recreate the
         identifier. */
      {
        tree type = tree_node ();
        if (!get_overrun ())
          {
            res = type ? make_conv_op_name (type) : conv_op_identifier;
            int tag = insert (res);
            dump (dumper::TREE)
              && dump ("Created conv_op:%d %S for %N", tag, res, type);
          }
      }
      break;

    case tt_anon_id:
    case tt_lambda_id:
      /* An anonymous or lambda id. */
      {
        res = make_anon_name ();
        if (tag == tt_lambda_id)
          IDENTIFIER_LAMBDA_P (res) = true;
        int tag = insert (res);
        dump (dumper::TREE)
          && dump ("Read %s identifier:%d %N",
                   IDENTIFIER_LAMBDA_P (res) ? "lambda" : "anon", tag, res);
      }
      break;

    case tt_typedef_type:
      res = tree_node ();
      if (res)
        {
          dump (dumper::TREE)
            && dump ("Read %stypedef %C:%N",
                     DECL_IMPLICIT_TYPEDEF_P (res) ? "implicit " : "",
                     TREE_CODE (res), res);
          res = TREE_TYPE (res);
        }
      break;

    case tt_derived_type:
      /* A type derived from some other type. */
      {
        enum tree_code code = tree_code (u ());
        res = tree_node ();

        switch (code)
          {
          default:
            set_overrun ();
            break;

          case ARRAY_TYPE:
            {
              tree domain = tree_node ();
              int dep = u ();
              if (!get_overrun ())
                res = build_cplus_array_type (res, domain, dep);
            }
            break;

          case COMPLEX_TYPE:
            if (!get_overrun ())
              res = build_complex_type (res);
            break;

          case BOOLEAN_TYPE:
            {
              unsigned precision = u ();
              if (!get_overrun ())
                res = build_nonstandard_boolean_type (precision);
            }
            break;

          case INTEGER_TYPE:
            if (res)
              {
                /* A range type (representing an array domain). */
                tree min = tree_node ();
                tree max = tree_node ();

                if (!get_overrun ())
                  res = build_range_type (res, min, max);
              }
            else
              {
                /* A new integral type (representing a bitfield). */
                unsigned enc = u ();
                if (!get_overrun ())
                  res = build_nonstandard_integer_type (enc >> 1, enc & 1);
              }
            break;

          case FUNCTION_TYPE:
          case METHOD_TYPE:
            {
              tree klass = code == METHOD_TYPE ? tree_node () : NULL_TREE;
              tree args = tree_node ();
              if (!get_overrun ())
                {
                  if (klass)
                    res = build_method_type_directly (klass, res, args);
                  else
                    res = build_function_type (res, args);
                }
            }
            break;

          case OFFSET_TYPE:
            {
              tree base = tree_node ();
              if (!get_overrun ())
                res = build_offset_type (base, res);
            }
            break;

          case POINTER_TYPE:
            if (!get_overrun ())
              res = build_pointer_type (res);
            break;

          case REFERENCE_TYPE:
            {
              bool rval = bool (u ());
              if (!get_overrun ())
                res = cp_build_reference_type (res, rval);
            }
            break;

          case DECLTYPE_TYPE:
          case TYPEOF_TYPE:
          case DEPENDENT_OPERATOR_TYPE:
            {
              tree expr = tree_node ();
              if (!get_overrun ())
                {
                  res = cxx_make_type (code);
                  TYPE_VALUES_RAW (res) = expr;
                  if (code == DECLTYPE_TYPE)
                    tree_node_bools (res);
                  SET_TYPE_STRUCTURAL_EQUALITY (res);
                }
            }
            break;

          case TRAIT_TYPE:
            {
              tree kind = tree_node ();
              tree type1 = tree_node ();
              tree type2 = tree_node ();
              if (!get_overrun ())
                {
                  res = cxx_make_type (TRAIT_TYPE);
                  TRAIT_TYPE_KIND_RAW (res) = kind;
                  TRAIT_TYPE_TYPE1 (res) = type1;
                  TRAIT_TYPE_TYPE2 (res) = type2;
                  SET_TYPE_STRUCTURAL_EQUALITY (res);
                }
            }
            break;

          case TYPE_ARGUMENT_PACK:
            if (!get_overrun ())
              {
                tree pack = cxx_make_type (TYPE_ARGUMENT_PACK);
                ARGUMENT_PACK_ARGS (pack) = res;
                res = pack;
              }
            break;

          case TYPE_PACK_EXPANSION:
            {
              bool local = u ();
              tree param_packs = tree_node ();
              tree extra_args = tree_node ();
              if (!get_overrun ())
                {
                  tree expn = cxx_make_type (TYPE_PACK_EXPANSION);
                  SET_TYPE_STRUCTURAL_EQUALITY (expn);
                  PACK_EXPANSION_PATTERN (expn) = res;
                  PACK_EXPANSION_PARAMETER_PACKS (expn) = param_packs;
                  PACK_EXPANSION_EXTRA_ARGS (expn) = extra_args;
                  PACK_EXPANSION_LOCAL_P (expn) = local;
                  res = expn;
                }
            }
            break;

          case TYPENAME_TYPE:
            {
              tree ctx = tree_node ();
              tree name = tree_node ();
              tree fullname = tree_node ();
              enum tag_types tag_type = tag_types (u ());

              if (!get_overrun ())
                res = build_typename_type (ctx, name, fullname, tag_type);
            }
            break;

          case UNBOUND_CLASS_TEMPLATE:
            {
              tree ctx = tree_node ();
              tree name = tree_node ();
              tree parms = tree_node ();

              if (!get_overrun ())
                res = make_unbound_class_template_raw (ctx, name, parms);
            }
            break;

          case VECTOR_TYPE:
            {
              poly_uint64 nunits;
              for (unsigned ix = 0; ix != NUM_POLY_INT_COEFFS; ix++)
                nunits.coeffs[ix] = wu ();
              if (!get_overrun ())
                res = build_vector_type (res, nunits);
            }
            break;
          }

        int tag = i ();
        if (!tag)
          {
            tag = insert (res);
            if (res)
              dump (dumper::TREE)
                && dump ("Created:%d derived type %C", tag, code);
          }
        else
          res = back_ref (tag);
      }
      break;

    case tt_variant_type:
      /* Variant of some type. */
      {
        res = tree_node ();
        int flags = i ();
        if (get_overrun ())
          ;
        else if (flags < 0)
          /* No change. */;
        else if (TREE_CODE (res) == FUNCTION_TYPE
                 || TREE_CODE (res) == METHOD_TYPE)
          {
            cp_ref_qualifier rqual = cp_ref_qualifier (flags & 3);
            bool late = (flags >> 2) & 1;
            cp_cv_quals quals = cp_cv_quals (flags >> 3);

            tree raises = tree_node ();
            if (raises == error_mark_node)
              raises = TYPE_RAISES_EXCEPTIONS (res);

            res = build_cp_fntype_variant (res, rqual, raises, late);
            if (TREE_CODE (res) == FUNCTION_TYPE)
              res = apply_memfn_quals (res, quals, rqual);
          }
        else
          {
            res = build_aligned_type (res, (1u << flags) >> 1);
            TYPE_USER_ALIGN (res) = true;
          }

        if (tree attribs = tree_node ())
          res = cp_build_type_attribute_variant (res, attribs);

        int quals = i ();
        if (quals >= 0 && !get_overrun ())
          res = cp_build_qualified_type (res, quals);

        int tag = i ();
        if (!tag)
          {
            tag = insert (res);
            if (res)
              dump (dumper::TREE)
                && dump ("Created:%d variant type %C", tag, TREE_CODE (res));
          }
        else
          res = back_ref (tag);
      }
      break;

    case tt_tinfo_var:
    case tt_tinfo_typedef:
      /* A tinfo var or typedef. */
      {
        bool is_var = tag == tt_tinfo_var;
        unsigned ix = u ();
        tree type = NULL_TREE;

        if (is_var)
          {
            tree name = tree_node ();
            type = tree_node ();

            if (!get_overrun ())
              res = get_tinfo_decl_direct (type, name, int (ix));
          }
        else
          {
            if (!get_overrun ())
              {
                type = get_pseudo_tinfo_type (ix);
                res = TYPE_NAME (type);
              }
          }
        if (res)
          {
            int tag = insert (res);
            dump (dumper::TREE)
              && dump ("Created tinfo_%s:%d %S:%u for %N",
                       is_var ? "var" : "decl", tag, res, ix, type);
            if (!is_var)
              {
                tag = insert (type);
                dump (dumper::TREE)
                  && dump ("Created tinfo_type:%d %u %N", tag, ix, type);
              }
          }
      }
      break;

    case tt_ptrmem_type:
      /* A pointer to member function. */
      {
        tree type = tree_node ();
        if (type && TREE_CODE (type) == POINTER_TYPE
            && TREE_CODE (TREE_TYPE (type)) == METHOD_TYPE)
          {
            res = build_ptrmemfunc_type (type);
            int tag = insert (res);
            dump (dumper::TREE) && dump ("Created:%d ptrmem type", tag);
          }
        else
          set_overrun ();
      }
      break;

    case tt_nttp_var:
      /* An NTTP object. */
      {
        tree init = tree_node ();
        tree name = tree_node ();
        if (!get_overrun ())
          {
            res = get_template_parm_object (init, name);
            int tag = insert (res);
            dump (dumper::TREE)
              && dump ("Created nttp object:%d %N", tag, name);
          }
      }
      break;

    case tt_enum_value:
      /* An enum const value. */
      {
        if (tree decl = tree_node ())
          {
            dump (dumper::TREE) && dump ("Read enum value %N", decl);
            res = DECL_INITIAL (decl);
          }

        if (!res)
          set_overrun ();
      }
      break;

    case tt_enum_decl:
      /* An enum decl. */
      {
        tree ctx = tree_node ();
        tree name = tree_node ();

        if (!get_overrun ()
            && TREE_CODE (ctx) == ENUMERAL_TYPE)
          res = find_enum_member (ctx, name);

        if (!res)
          set_overrun ();
        else
          {
            int tag = insert (res);
            dump (dumper::TREE)
              && dump ("Read enum decl:%d %C:%N", tag, TREE_CODE (res), res);
          }
      }
      break;

    case tt_data_member:
      /* A data member. */
      {
        tree ctx = tree_node ();
        tree name = tree_node ();

        if (!get_overrun ()
            && RECORD_OR_UNION_TYPE_P (ctx))
          {
            if (name)
              res = lookup_class_binding (ctx, name);
            else
              res = lookup_field_ident (ctx, u ());

            if (!res
                || TREE_CODE (res) != FIELD_DECL
                || DECL_CONTEXT (res) != ctx)
              res = NULL_TREE;
          }

        if (!res)
          set_overrun ();
        else
          {
            int tag = insert (res);
            dump (dumper::TREE)
              && dump ("Read member:%d %C:%N", tag, TREE_CODE (res), res);
          }
      }
      break;

    case tt_binfo:
      /* A BINFO.  Walk the tree of the dominating type. */
      {
        tree type;
        unsigned ix = binfo_mergeable (&type);
        if (type)
          {
            res = TYPE_BINFO (type);
            for (; ix && res; res = TREE_CHAIN (res))
              ix--;
            if (!res)
              set_overrun ();
          }

        if (get_overrun ())
          break;

        /* Insert binfo into backreferences. */
        tag = insert (res);
        dump (dumper::TREE) && dump ("Read binfo:%d %N", tag, res);
      }
      break;

    case tt_vtable:
      {
        unsigned ix = u ();
        tree ctx = tree_node ();
        dump (dumper::TREE) && dump ("Reading vtable %N[%u]", ctx, ix);
        if (TREE_CODE (ctx) == RECORD_TYPE && TYPE_LANG_SPECIFIC (ctx))
          for (res = CLASSTYPE_VTABLES (ctx); res; res = DECL_CHAIN (res))
            if (!ix--)
              break;
        if (!res)
          set_overrun ();
      }
      break;

    case tt_thunk:
      {
        int fixed = i ();
        tree target = tree_node ();
        tree virt = tree_node ();

        for (tree thunk = DECL_THUNKS (target);
             thunk; thunk = DECL_CHAIN (thunk))
          if (THUNK_FIXED_OFFSET (thunk) == fixed
              && !THUNK_VIRTUAL_OFFSET (thunk) == !virt
              && (!virt
                  || tree_int_cst_equal (virt, THUNK_VIRTUAL_OFFSET (thunk))))
            {
              res = thunk;
              break;
            }

        int tag = insert (res);
        if (res)
          dump (dumper::TREE)
            && dump ("Read:%d thunk %N to %N", tag, DECL_NAME (res), target);
        else
          set_overrun ();
      }
      break;

    case tt_clone_ref:
      {
        tree target = tree_node ();
        tree name = tree_node ();

        if (DECL_P (target) && DECL_MAYBE_IN_CHARGE_CDTOR_P (target))
          {
            tree clone;
            FOR_EVERY_CLONE (clone, target)
              if (DECL_NAME (clone) == name)
                {
                  res = clone;
                  break;
                }
          }

        /* A clone might have a different vtable entry. */
        if (res && DECL_VIRTUAL_P (res))
          DECL_VINDEX (res) = tree_node ();

        if (!res)
          set_overrun ();
        int tag = insert (res);
        if (res)
          dump (dumper::TREE)
            && dump ("Read:%d clone %N of %N", tag, DECL_NAME (res), target);
        else
          set_overrun ();
      }
      break;

    case tt_entity:
      /* Index into the entity table.  Perhaps not loaded yet! */
      {
        unsigned origin = state->slurp->remap_module (u ());
        unsigned ident = u ();
        module_state *from = (*modules)[origin];

        if (!origin || ident >= from->entity_num)
          set_overrun ();
        if (!get_overrun ())
          {
            binding_slot *slot = &(*entity_ary)[from->entity_lwm + ident];
            if (slot->is_lazy ())
              if (!from->lazy_load (ident, slot))
                set_overrun ();
            res = *slot;
          }

        if (res)
          {
            const char *kind = (origin != state->mod ? "Imported" : "Named");
            int tag = insert (res);
            dump (dumper::TREE)
              && dump ("%s:%d %C:%N@%M", kind, tag, TREE_CODE (res),
                       res, (*modules)[origin]);

            if (!add_indirects (res))
              {
                set_overrun ();
                res = NULL_TREE;
              }
          }
      }
      break;

    case tt_template:
      /* A template. */
      if (tree tpl = tree_node ())
        {
          res = DECL_TEMPLATE_RESULT (tpl);
          dump (dumper::TREE)
            && dump ("Read template %C:%N", TREE_CODE (res), res);
        }
      break;
    }

  if (is_use && !unused && res && DECL_P (res) && !TREE_USED (res))
    {
      /* Mark decl used as mark_used does -- we cannot call
         mark_used in the middle of streaming, we only need a subset
         of its functionality. */
      TREE_USED (res) = true;

      /* And for structured bindings also the underlying decl. */
      if (DECL_DECOMPOSITION_P (res) && DECL_DECOMP_BASE (res))
        TREE_USED (DECL_DECOMP_BASE (res)) = true;

      if (DECL_CLONED_FUNCTION_P (res))
        TREE_USED (DECL_CLONED_FUNCTION (res)) = true;
    }

  dump.outdent ();
  return res;
}

void
trees_out::tpl_parms (tree parms, unsigned &tpl_levels)
{
  if (!parms)
    return;

  if (TREE_VISITED (parms))
    {
      ref_node (parms);
      return;
    }

  tpl_parms (TREE_CHAIN (parms), tpl_levels);

  tree vec = TREE_VALUE (parms);
  unsigned len = TREE_VEC_LENGTH (vec);
  /* Depth. */
  int tag = insert (parms);
  if (streaming_p ())
    {
      i (len + 1);
      dump (dumper::TREE)
        && dump ("Writing template parms:%d level:%N length:%d",
                 tag, TREE_PURPOSE (parms), len);
    }
  tree_node (TREE_PURPOSE (parms));

  for (unsigned ix = 0; ix != len; ix++)
    {
      tree parm = TREE_VEC_ELT (vec, ix);
      tree decl = TREE_VALUE (parm);

      gcc_checking_assert (DECL_TEMPLATE_PARM_P (decl));
      if (CHECKING_P)
        switch (TREE_CODE (decl))
          {
          default: gcc_unreachable ();

          case TEMPLATE_DECL:
            gcc_assert ((TREE_CODE (TREE_TYPE (decl)) == TEMPLATE_TEMPLATE_PARM)
                        && (TREE_CODE (DECL_TEMPLATE_RESULT (decl)) == TYPE_DECL)
                        && (TYPE_NAME (TREE_TYPE (decl)) == decl));
            break;

          case TYPE_DECL:
            gcc_assert ((TREE_CODE (TREE_TYPE (decl)) == TEMPLATE_TYPE_PARM)
                        && (TYPE_NAME (TREE_TYPE (decl)) == decl));
            break;

          case PARM_DECL:
            gcc_assert ((TREE_CODE (DECL_INITIAL (decl)) == TEMPLATE_PARM_INDEX)
                        && (TREE_CODE (TEMPLATE_PARM_DECL (DECL_INITIAL (decl)))
                            == CONST_DECL)
                        && (DECL_TEMPLATE_PARM_P
                            (TEMPLATE_PARM_DECL (DECL_INITIAL (decl)))));
            break;
          }

      tree_node (decl);
      tree_node (TEMPLATE_PARM_CONSTRAINTS (parm));
    }

  tpl_levels++;
}

tree
trees_in::tpl_parms (unsigned &tpl_levels)
{
  tree parms = NULL_TREE;

  while (int len = i ())
    {
      if (len < 0)
        {
          parms = back_ref (len);
          continue;
        }

      len -= 1;
      parms = tree_cons (NULL_TREE, NULL_TREE, parms);
      int tag = insert (parms);
      TREE_PURPOSE (parms) = tree_node ();

      dump (dumper::TREE)
        && dump ("Reading template parms:%d level:%N length:%d",
                 tag, TREE_PURPOSE (parms), len);

      tree vec = make_tree_vec (len);
      for (int ix = 0; ix != len; ix++)
        {
          tree decl = tree_node ();
          if (!decl)
            return NULL_TREE;

          tree parm = build_tree_list (NULL, decl);
          TEMPLATE_PARM_CONSTRAINTS (parm) = tree_node ();

          TREE_VEC_ELT (vec, ix) = parm;
        }

      TREE_VALUE (parms) = vec;
      tpl_levels++;
    }

  return parms;
}

void
trees_out::tpl_parms_fini (tree tmpl, unsigned tpl_levels)
{
  for (tree parms = DECL_TEMPLATE_PARMS (tmpl);
       tpl_levels--; parms = TREE_CHAIN (parms))
    {
      tree vec = TREE_VALUE (parms);

      tree_node (TREE_TYPE (vec));
      for (unsigned ix = TREE_VEC_LENGTH (vec); ix--;)
        {
          tree parm = TREE_VEC_ELT (vec, ix);
          tree dflt = TREE_PURPOSE (parm);
          tree_node (dflt);

          /* Template template parameters need a context of their owning
             template.  This is quite tricky to infer correctly on stream-in
             (see PR c++/98881) so we'll just provide it directly. */
          tree decl = TREE_VALUE (parm);
          if (TREE_CODE (decl) == TEMPLATE_DECL)
            tree_node (DECL_CONTEXT (decl));
        }
    }
}

bool
trees_in::tpl_parms_fini (tree tmpl, unsigned tpl_levels)
{
  for (tree parms = DECL_TEMPLATE_PARMS (tmpl);
       tpl_levels--; parms = TREE_CHAIN (parms))
    {
      tree vec = TREE_VALUE (parms);

      TREE_TYPE (vec) = tree_node ();
      for (unsigned ix = TREE_VEC_LENGTH (vec); ix--;)
        {
          tree parm = TREE_VEC_ELT (vec, ix);
          tree dflt = tree_node ();
          TREE_PURPOSE (parm) = dflt;

          tree decl = TREE_VALUE (parm);
          if (TREE_CODE (decl) == TEMPLATE_DECL)
            DECL_CONTEXT (decl) = tree_node ();

          if (get_overrun ())
            return false;
        }
    }
  return true;
}

/* PARMS is a LIST, one node per level.
   TREE_VALUE is a TREE_VEC of parm info for that level.
   Each ELT is a TREE_LIST:
   TREE_VALUE is PARM_DECL, TYPE_DECL or TEMPLATE_DECL;
   TREE_PURPOSE is the default value. */

void
trees_out::tpl_header (tree tpl, unsigned *tpl_levels)
{
  tree parms = DECL_TEMPLATE_PARMS (tpl);
  tpl_parms (parms, *tpl_levels);

  /* Mark end. */
  if (streaming_p ())
    u (0);

  if (*tpl_levels)
    tree_node (TEMPLATE_PARMS_CONSTRAINTS (parms));
}

bool
trees_in::tpl_header (tree tpl, unsigned *tpl_levels)
{
  tree parms = tpl_parms (*tpl_levels);
  if (!parms)
    return false;

  DECL_TEMPLATE_PARMS (tpl) = parms;

  if (*tpl_levels)
    TEMPLATE_PARMS_CONSTRAINTS (parms) = tree_node ();

  return true;
}

/* Stream skeleton parm nodes, with their flags, type & parm indices.
   All the parms will have consecutive tags. */

void
trees_out::fn_parms_init (tree fn)
{
  /* First init them. */
  int base_tag = ref_num - 1;
  int ix = 0;
  for (tree parm = DECL_ARGUMENTS (fn);
       parm; parm = DECL_CHAIN (parm), ix++)
    {
      if (streaming_p ())
        {
          start (parm);
          tree_node_bools (parm);
        }
      int tag = insert (parm);
      gcc_checking_assert (base_tag - ix == tag);
    }
  /* Mark the end. */
  if (streaming_p ())
    u (0);

  /* Now stream their contents. */
  ix = 0;
  for (tree parm = DECL_ARGUMENTS (fn);
       parm; parm = DECL_CHAIN (parm), ix++)
    {
      if (streaming_p ())
        dump (dumper::TREE)
          && dump ("Writing parm:%d %u (%N) of %N",
                   base_tag - ix, ix, parm, fn);
      tree_node_vals (parm);
    }

  if (!streaming_p ())
    {
      /* We must walk contract attrs so the dependency graph is complete. */
      for (tree contract = DECL_CONTRACTS (fn);
           contract;
           contract = CONTRACT_CHAIN (contract))
        tree_node (contract);
    }

  /* Write a reference to contracts pre/post functions, if any, to avoid
     regenerating them in importers. */
  tree_node (DECL_PRE_FN (fn));
  tree_node (DECL_POST_FN (fn));
}

/* Build skeleton parm nodes, read their flags, type & parm indices. */

int
trees_in::fn_parms_init (tree fn)
{
  int base_tag = ~(int)back_refs.length ();

  tree *parm_ptr = &DECL_ARGUMENTS (fn);
  int ix = 0;
  for (; int code = u (); ix++)
    {
      tree parm = start (code);
      if (!tree_node_bools (parm))
        return 0;

      int tag = insert (parm);
      gcc_checking_assert (base_tag - ix == tag);
      *parm_ptr = parm;
      parm_ptr = &DECL_CHAIN (parm);
    }

  ix = 0;
  for (tree parm = DECL_ARGUMENTS (fn);
       parm; parm = DECL_CHAIN (parm), ix++)
    {
      dump (dumper::TREE)
        && dump ("Reading parm:%d %u (%N) of %N",
                 base_tag - ix, ix, parm, fn);
      if (!tree_node_vals (parm))
        return 0;
    }

  /* Reload references to contract functions, if any. */
  tree pre_fn = tree_node ();
  tree post_fn = tree_node ();
  set_contract_functions (fn, pre_fn, post_fn);

  return base_tag;
}

/* Read the remaining parm node data. Replace with existing (if
   non-null) in the map. */

void
trees_in::fn_parms_fini (int tag, tree fn, tree existing, bool is_defn)
{
  tree existing_parm = existing ? DECL_ARGUMENTS (existing) : NULL_TREE;
  tree parms = DECL_ARGUMENTS (fn);
  unsigned ix = 0;
  for (tree parm = parms; parm; parm = DECL_CHAIN (parm), ix++)
    {
      if (existing_parm)
	{
	  if (is_defn && !DECL_SAVED_TREE (existing))
	    {
	      /* If we're about to become the definition, set the
		 names of the parms from us. */
	      DECL_NAME (existing_parm) = DECL_NAME (parm);
	      DECL_SOURCE_LOCATION (existing_parm) = DECL_SOURCE_LOCATION (parm);
	    }

	  back_refs[~tag] = existing_parm;
	  existing_parm = DECL_CHAIN (existing_parm);
	}
      tag--;
    }
}

/* Encode into KEY the position of the local type (class or enum)
   declaration DECL within FN. The position is encoded as the
   index of the innermost BLOCK (numbered in BFS order) along with
   the index within its BLOCK_VARS list. */

void
trees_out::key_local_type (merge_key& key, tree decl, tree fn)
{
  auto_vec<tree, 4> blocks;
  blocks.quick_push (DECL_INITIAL (fn));
  unsigned block_ix = 0;
  while (block_ix != blocks.length ())
    {
      tree block = blocks[block_ix];
      unsigned decl_ix = 0;
      for (tree var = BLOCK_VARS (block); var; var = DECL_CHAIN (var))
	{
	  if (TREE_CODE (var) != TYPE_DECL)
	    continue;
	  if (var == decl)
	    {
	      key.index = (block_ix << 10) | decl_ix;
	      return;
	    }
	  ++decl_ix;
	}
      for (tree sub = BLOCK_SUBBLOCKS (block); sub; sub = BLOCK_CHAIN (sub))
	blocks.safe_push (sub);
      ++block_ix;
    }

  /* Not-found value. */
  key.index = 1023;
}

/* Look up the local type at the position encoded by KEY within FN,
   named NAME. */

tree
trees_in::key_local_type (const merge_key& key, tree fn, tree name)
{
  if (!DECL_INITIAL (fn))
    return NULL_TREE;

  const unsigned block_pos = key.index >> 10;
  const unsigned decl_pos = key.index & 1023;

  if (decl_pos == 1023)
    return NULL_TREE;

  auto_vec<tree, 4> blocks;
  blocks.quick_push (DECL_INITIAL (fn));
  unsigned block_ix = 0;
  while (block_ix != blocks.length ())
    {
      tree block = blocks[block_ix];
      if (block_ix == block_pos)
	{
	  unsigned decl_ix = 0;
	  for (tree var = BLOCK_VARS (block); var; var = DECL_CHAIN (var))
	    {
	      if (TREE_CODE (var) != TYPE_DECL)
		continue;
	      /* Prefer using the identifier as the key for more robustness
		 to ODR violations, except for anonymous types since their
		 compiler-generated identifiers aren't stable. */
	      if (IDENTIFIER_ANON_P (name)
		  ? decl_ix == decl_pos
		  : DECL_NAME (var) == name)
		return var;
	      ++decl_ix;
	    }
	  return NULL_TREE;
	}
      for (tree sub = BLOCK_SUBBLOCKS (block); sub; sub = BLOCK_CHAIN (sub))
	blocks.safe_push (sub);
      ++block_ix;
    }

  return NULL_TREE;
}

/* DEP is the depset of some decl we're streaming by value. Determine
   the merging behaviour. */

merge_kind
trees_out::get_merge_kind (tree decl, depset *dep)
{
  if (!dep)
    {
      if (VAR_OR_FUNCTION_DECL_P (decl))
	{
	  /* Any var or function with template info should have DEP. */
	  gcc_checking_assert (!DECL_LANG_SPECIFIC (decl)
			       || !DECL_TEMPLATE_INFO (decl));
	  if (DECL_LOCAL_DECL_P (decl))
	    return MK_unique;
	}

      /* Either unique, or some member of a class that cannot have an
	 out-of-class definition. For instance a FIELD_DECL. */
      tree ctx = CP_DECL_CONTEXT (decl);
      if (TREE_CODE (ctx) == FUNCTION_DECL)
	{
	  /* USING_DECLs and NAMESPACE_DECLs cannot have DECL_TEMPLATE_INFO --
	     this isn't permitting them to have one. */
	  gcc_checking_assert (TREE_CODE (decl) == USING_DECL
			       || TREE_CODE (decl) == NAMESPACE_DECL
			       || !DECL_LANG_SPECIFIC (decl)
			       || !DECL_TEMPLATE_INFO (decl));

	  return MK_unique;
	}

      if (TREE_CODE (decl) == TEMPLATE_DECL
	  && DECL_UNINSTANTIATED_TEMPLATE_FRIEND_P (decl))
	return MK_local_friend;

      gcc_checking_assert (TYPE_P (ctx));
      if (TREE_CODE (decl) == USING_DECL)
	return MK_field;

      if (TREE_CODE (decl) == FIELD_DECL)
	{
	  if (DECL_NAME (decl))
	    {
	      /* Anonymous FIELD_DECLs have a NULL name. */
	      gcc_checking_assert (!IDENTIFIER_ANON_P (DECL_NAME (decl)));
	      return MK_named;
	    }

	  if (!DECL_NAME (decl)
	      && !RECORD_OR_UNION_TYPE_P (TREE_TYPE (decl))
	      && !DECL_BIT_FIELD_REPRESENTATIVE (decl))
	    {
	      /* The underlying storage unit for a bitfield. We do not
		 need to dedup it, because it's only reachable through
		 the bitfields it represents. And those are deduped. */
	      // FIXME: Is that assertion correct -- do we ever fish it
	      // out and put it in an expr?
	      gcc_checking_assert ((TREE_CODE (TREE_TYPE (decl)) == ARRAY_TYPE
				    ? TREE_CODE (TREE_TYPE (TREE_TYPE (decl)))
				    : TREE_CODE (TREE_TYPE (decl)))
				   == INTEGER_TYPE);
	      return MK_unique;
	    }

	  return MK_field;
	}

      if (TREE_CODE (decl) == CONST_DECL)
	return MK_named;

      if (TREE_CODE (decl) == VAR_DECL
	  && DECL_VTABLE_OR_VTT_P (decl))
	return MK_vtable;

      if (DECL_THUNK_P (decl))
	/* Thunks are unique-enough, because they're only referenced
	   from the vtable. And that's either new (so we want the
	   thunks), or it's a duplicate (so it will be dropped). */
	return MK_unique;

      /* There should be no other cases. */
      gcc_unreachable ();
    }

  gcc_checking_assert (TREE_CODE (decl) != FIELD_DECL
		       && TREE_CODE (decl) != USING_DECL
		       && TREE_CODE (decl) != CONST_DECL);

  if (is_key_order ())
    {
      /* When doing the mergeability graph, there's an indirection to
	 the actual depset. */
      gcc_assert (dep->is_special ());
      dep = dep->deps[0];
    }

  gcc_checking_assert (decl == dep->get_entity ());

  merge_kind mk = MK_named;
  switch (dep->get_entity_kind ())
    {
    default:
      gcc_unreachable ();

    case depset::EK_PARTIAL:
      mk = MK_partial;
      break;

    case depset::EK_DECL:
      {
	tree ctx = CP_DECL_CONTEXT (decl);

	switch (TREE_CODE (ctx))
	  {
	  default:
	    gcc_unreachable ();

	  case FUNCTION_DECL:
	    gcc_checking_assert
	      (DECL_IMPLICIT_TYPEDEF_P (STRIP_TEMPLATE (decl)));

	    mk = MK_local_type;
	    break;

	  case RECORD_TYPE:
	  case UNION_TYPE:
	  case NAMESPACE_DECL:
	    if (DECL_NAME (decl) == as_base_identifier)
	      {
		mk = MK_as_base;
		break;
	      }

	    /* A lambda may have a class as its context, even though it
	       isn't a member in the traditional sense; see the test
	       g++.dg/modules/lambda-6_a.C. */
	    if (DECL_IMPLICIT_TYPEDEF_P (STRIP_TEMPLATE (decl))
		&& LAMBDA_TYPE_P (TREE_TYPE (decl)))
	      if (tree scope
		  = LAMBDA_EXPR_EXTRA_SCOPE (CLASSTYPE_LAMBDA_EXPR
					     (TREE_TYPE (decl))))
		{
		  /* Lambdas attached to fields are keyed to their class. */
		  if (TREE_CODE (scope) == FIELD_DECL)
		    scope = TYPE_NAME (DECL_CONTEXT (scope));
		  if (DECL_LANG_SPECIFIC (scope)
		      && DECL_MODULE_KEYED_DECLS_P (scope))
		    {
		      mk = MK_keyed;
		      break;
		    }
		}

	    if (TREE_CODE (decl) == TEMPLATE_DECL
		&& DECL_UNINSTANTIATED_TEMPLATE_FRIEND_P (decl))
	      {
		mk = MK_local_friend;
		break;
	      }

	    if (IDENTIFIER_ANON_P (DECL_NAME (decl)))
	      {
		if (RECORD_OR_UNION_TYPE_P (ctx))
		  mk = MK_field;
		else if (DECL_IMPLICIT_TYPEDEF_P (decl)
			 && UNSCOPED_ENUM_P (TREE_TYPE (decl))
			 && TYPE_VALUES (TREE_TYPE (decl)))
		  /* Keyed by first enum value, and underlying type. */
		  mk = MK_enum;
		else
		  /* No way to merge it, it is an ODR land-mine. */
		  mk = MK_unique;
	      }
	  }
      }
      break;

    case depset::EK_SPECIALIZATION:
      {
	gcc_checking_assert (dep->is_special ());

	if (TREE_CODE (DECL_CONTEXT (decl)) == FUNCTION_DECL)
	  /* Block-scope classes of templates are themselves
	     templates. */
	  gcc_checking_assert (DECL_IMPLICIT_TYPEDEF_P (decl));

	if (dep->is_friend_spec ())
	  mk = MK_friend_spec;
	else if (dep->is_type_spec ())
	  mk = MK_type_spec;
	else
	  mk = MK_decl_spec;

	if (TREE_CODE (decl) == TEMPLATE_DECL)
	  {
	    spec_entry *entry = reinterpret_cast <spec_entry *> (dep->deps[0]);
	    if (TREE_CODE (entry->spec) != TEMPLATE_DECL)
	      mk = merge_kind (mk | MK_tmpl_tmpl_mask);
	  }
      }
      break;
    }

  return mk;
}


/* The container of DECL -- not necessarily its context! */

tree
trees_out::decl_container (tree decl)
{
  int use_tpl;
  tree tpl = NULL_TREE;
  if (tree template_info = node_template_info (decl, use_tpl))
    tpl = TI_TEMPLATE (template_info);
  if (tpl == decl)
    tpl = nullptr;

  /* Stream the template we're instantiated from. */
  tree_node (tpl);

  tree container = NULL_TREE;
  if (TREE_CODE (decl) == TEMPLATE_DECL
      && DECL_UNINSTANTIATED_TEMPLATE_FRIEND_P (decl))
    container = DECL_CHAIN (decl);
  else
    container = CP_DECL_CONTEXT (decl);

  if (TYPE_P (container))
    container = TYPE_NAME (container);

  tree_node (container);

  return container;
}

tree
trees_in::decl_container ()
{
  /* The maybe-template. */
  (void)tree_node ();

  tree container = tree_node ();

  return container;
}

/* Write out key information about a mergeable DEP. Does not write
   the contents of DEP itself. The context has already been
   written. The container has already been streamed. */

void
trees_out::key_mergeable (int tag, merge_kind mk, tree decl, tree inner,
			  tree container, depset *dep)
{
  if (dep && is_key_order ())
    {
      gcc_checking_assert (dep->is_special ());
      dep = dep->deps[0];
    }

  if (streaming_p ())
    dump (dumper::MERGE)
      && dump ("Writing:%d's %s merge key (%s) %C:%N", tag, merge_kind_name[mk],
	       dep ? dep->entity_kind_name () : "contained",
	       TREE_CODE (decl), decl);

  /* Now write the locating information. */
  if (mk & MK_template_mask)
    {
      /* Specializations are located via their originating template,
	 and the set of template args they specialize. */
      gcc_checking_assert (dep && dep->is_special ());
      spec_entry *entry = reinterpret_cast <spec_entry *> (dep->deps[0]);

      tree_node (entry->tmpl);
      tree_node (entry->args);
      if (mk & MK_tmpl_decl_mask)
	if (flag_concepts && TREE_CODE (inner) == VAR_DECL)
	  {
	    /* Variable template partial specializations might need
	       constraints (see spec_hasher::equal). It's simpler to
	       write NULL when we don't need them. */
	    tree constraints = NULL_TREE;

	    if (uses_template_parms (entry->args))
	      constraints = get_constraints (inner);
	    tree_node (constraints);
	  }

      if (CHECKING_P)
	{
	  /* Make sure we can locate the decl. */
	  tree existing = match_mergeable_specialization
	    (bool (mk & MK_tmpl_decl_mask), entry);

	  gcc_assert (existing);
	  if (mk & MK_tmpl_decl_mask)
	    {
	      if (mk & MK_tmpl_tmpl_mask)
		existing = DECL_TI_TEMPLATE (existing);
	    }
	  else
	    {
	      if (mk & MK_tmpl_tmpl_mask)
		existing = CLASSTYPE_TI_TEMPLATE (existing);
	      else
		existing = TYPE_NAME (existing);
	    }

	  /* The walkabout should have found ourselves. */
	  gcc_checking_assert (TREE_CODE (decl) == TYPE_DECL
			       ? same_type_p (TREE_TYPE (decl),
					      TREE_TYPE (existing))
			       : existing == decl);
	}
    }
  else if (mk != MK_unique)
    {
      merge_key key;
      tree name = DECL_NAME (decl);

      switch (mk)
	{
	default:
	  gcc_unreachable ();

	case MK_named:
	case MK_friend_spec:
	  if (IDENTIFIER_CONV_OP_P (name))
	    name = conv_op_identifier;

	  if (TREE_CODE (inner) == FUNCTION_DECL)
	    {
	      /* Functions are distinguished by parameter types. */
	      tree fn_type = TREE_TYPE (inner);

	      key.ref_q = type_memfn_rqual (fn_type);
	      key.args = TYPE_ARG_TYPES (fn_type);

	      if (tree reqs = get_constraints (inner))
		{
		  if (cxx_dialect < cxx20)
		    reqs = CI_ASSOCIATED_CONSTRAINTS (reqs);
		  else
		    reqs = CI_DECLARATOR_REQS (reqs);
		  key.constraints = reqs;
		}

	      if (IDENTIFIER_CONV_OP_P (name)
		  || (decl != inner
		      && !(name == fun_identifier
			   /* In case the user names something _FUN */
			   && LAMBDA_TYPE_P (DECL_CONTEXT (inner)))))
		/* A function template or conversion operator also needs
		   the return type. Except for the _FUN thunk of a
		   generic lambda, which has a recursive decltype'd
		   return type. */
		// FIXME: What if the return type is a voldemort?
		key.ret = fndecl_declared_return_type (inner);
	    }
	  break;

	case MK_field:
	  {
	    unsigned ix = 0;
	    if (TREE_CODE (inner) != FIELD_DECL)
	      name = NULL_TREE;
	    else
	      gcc_checking_assert (!name || !IDENTIFIER_ANON_P (name));

	    for (tree field = TYPE_FIELDS (TREE_TYPE (container));
		 ; field = DECL_CHAIN (field))
	      {
		tree finner = STRIP_TEMPLATE (field);
		if (TREE_CODE (finner) == TREE_CODE (inner))
		  {
		    if (finner == inner)
		      break;
		    ix++;
		  }
	      }
	    key.index = ix;
	  }
	  break;

	case MK_vtable:
	  {
	    tree vtable = CLASSTYPE_VTABLES (TREE_TYPE (container));
	    for (unsigned ix = 0; ; vtable = DECL_CHAIN (vtable), ix++)
	      if (vtable == decl)
		{
		  key.index = ix;
		  break;
		}
	    name = NULL_TREE;
	  }
	  break;

	case MK_as_base:
	  gcc_checking_assert
	    (decl == TYPE_NAME (CLASSTYPE_AS_BASE (TREE_TYPE (container))));
	  break;

	case MK_local_friend:
	  {
	    /* Find by index on the class's DECL_LIST. */
	    unsigned ix = 0;
	    for (tree decls = CLASSTYPE_DECL_LIST (TREE_CHAIN (decl));
		 decls; decls = TREE_CHAIN (decls))
	      if (!TREE_PURPOSE (decls))
		{
		  tree frnd = friend_from_decl_list (TREE_VALUE (decls));
		  if (frnd == decl)
		    break;
		  ix++;
		}
	    key.index = ix;
	    name = NULL_TREE;
	  }
	  break;

	case MK_local_type:
	  key_local_type (key, STRIP_TEMPLATE (decl), container);
	  break;

	case MK_enum:
	  {
	    /* Anonymous enums are located by their first identifier,
	       and underlying type. */
	    tree type = TREE_TYPE (decl);

	    gcc_checking_assert (UNSCOPED_ENUM_P (type));
	    /* Using the type name drops the bit precision we might
	       have been using on the enum. */
	    key.ret = TYPE_NAME (ENUM_UNDERLYING_TYPE (type));
	    if (tree values = TYPE_VALUES (type))
	      name = DECL_NAME (TREE_VALUE (values));
	  }
	  break;

	case MK_keyed:
	  {
	    gcc_checking_assert (LAMBDA_TYPE_P (TREE_TYPE (inner)));
	    tree scope = LAMBDA_EXPR_EXTRA_SCOPE (CLASSTYPE_LAMBDA_EXPR
						  (TREE_TYPE (inner)));
	    gcc_checking_assert (TREE_CODE (scope) == VAR_DECL
				 || TREE_CODE (scope) == FIELD_DECL
				 || TREE_CODE (scope) == PARM_DECL
				 || TREE_CODE (scope) == TYPE_DECL);
	    /* Lambdas attached to fields are keyed to the class. */
	    if (TREE_CODE (scope) == FIELD_DECL)
	      scope = TYPE_NAME (DECL_CONTEXT (scope));
	    auto *root = keyed_table->get (scope);
	    unsigned ix = root->length ();
	    /* If we don't find it, we'll write a really big number
	       that the reader will ignore. */
	    while (ix--)
	      if ((*root)[ix] == inner)
		break;

	    /* Use the keyed-to decl as the 'name'. */
	    name = scope;
	    key.index = ix;
	  }
	  break;

	case MK_partial:
	  {
	    tree ti = get_template_info (inner);
	    key.constraints = get_constraints (inner);
	    key.ret = TI_TEMPLATE (ti);
	    key.args = TI_ARGS (ti);
	  }
	  break;
	}

      tree_node (name);
      if (streaming_p ())
	{
	  unsigned code = (key.ref_q << 0) | (key.index << 2);
	  u (code);
	}

      if (mk == MK_enum)
	tree_node (key.ret);
      else if (mk == MK_partial
	       || (mk == MK_named && inner
		   && TREE_CODE (inner) == FUNCTION_DECL))
	{
	  tree_node (key.ret);
	  tree arg = key.args;
	  if (mk == MK_named)
	    while (arg && arg != void_list_node)
	      {
		tree_node (TREE_VALUE (arg));
		arg = TREE_CHAIN (arg);
	      }
	  tree_node (arg);
	  tree_node (key.constraints);
	}
    }
}

/* DECL is a new declaration that may be duplicated in OVL. Use RET &
   ARGS to find its clone, or NULL. If DECL's DECL_NAME is NULL, this
   has been found by a proxy. It will be an enum type located by its
   first member.

   We're conservative with matches, so ambiguous decls will be
   registered as different, then lead to a lookup error if the two
   modules are both visible. Perhaps we want to do something similar
   to duplicate decls to get ODR errors on loading? We already have
   some special casing for namespaces. */

static tree
check_mergeable_decl (merge_kind mk, tree decl, tree ovl, merge_key const &key)
{
  tree found = NULL_TREE;
  for (ovl_iterator iter (ovl); !found && iter; ++iter)
    {
      tree match = *iter;

      tree d_inner = decl;
      tree m_inner = match;

    again:
      if (TREE_CODE (d_inner) != TREE_CODE (m_inner))
	{
	  if (TREE_CODE (match) == NAMESPACE_DECL
	      && !DECL_NAMESPACE_ALIAS (match))
	    /* Namespaces are never overloaded. */
	    found = match;

	  continue;
	}

      switch (TREE_CODE (d_inner))
	{
	case TEMPLATE_DECL:
	  if (template_heads_equivalent_p (d_inner, m_inner))
	    {
	      d_inner = DECL_TEMPLATE_RESULT (d_inner);
	      m_inner = DECL_TEMPLATE_RESULT (m_inner);
	      if (d_inner == error_mark_node
		  && TYPE_DECL_ALIAS_P (m_inner))
		{
		  found = match;
		  break;
		}
	      goto again;
	    }
	  break;

	case FUNCTION_DECL:
	  if (tree m_type = TREE_TYPE (m_inner))
	    if ((!key.ret
		 || same_type_p (key.ret, fndecl_declared_return_type (m_inner)))
		&& type_memfn_rqual (m_type) == key.ref_q
		&& compparms (key.args, TYPE_ARG_TYPES (m_type))
		/* Reject if old is a "C" builtin and new is not "C".
		   Matches decls_match behaviour. */
		&& (!DECL_IS_UNDECLARED_BUILTIN (m_inner)
		    || !DECL_EXTERN_C_P (m_inner)
		    || DECL_EXTERN_C_P (d_inner))
		/* Reject if one is a different member of a
		   guarded/pre/post fn set. */
		&& (!flag_contracts
		    || (DECL_IS_PRE_FN_P (d_inner)
			== DECL_IS_PRE_FN_P (m_inner)))
		&& (!flag_contracts
		    || (DECL_IS_POST_FN_P (d_inner)
			== DECL_IS_POST_FN_P (m_inner))))
	      {
		tree m_reqs = get_constraints (m_inner);
		if (m_reqs)
		  {
		    if (cxx_dialect < cxx20)
		      m_reqs = CI_ASSOCIATED_CONSTRAINTS (m_reqs);
		    else
		      m_reqs = CI_DECLARATOR_REQS (m_reqs);
		  }

		if (cp_tree_equal (key.constraints, m_reqs))
		  found = match;
	      }
	  break;

	case TYPE_DECL:
	  if (DECL_IMPLICIT_TYPEDEF_P (d_inner)
	      == DECL_IMPLICIT_TYPEDEF_P (m_inner))
	    {
	      if (!IDENTIFIER_ANON_P (DECL_NAME (m_inner)))
		return match;
	      else if (mk == MK_enum
		       && (TYPE_NAME (ENUM_UNDERLYING_TYPE (TREE_TYPE (m_inner)))
			   == key.ret))
		found = match;
	    }
	  break;

	default:
	  found = match;
	  break;
	}
    }

  return found;
}

/* DECL, INNER & TYPE are a skeleton set of nodes for a decl. Only
   the bools have been filled in. Read its merging key and merge it.
   Returns the existing decl if there is one. */

tree
trees_in::key_mergeable (int tag, merge_kind mk, tree decl, tree inner,
			 tree type, tree container, bool is_attached)
{
  const char *kind = "new";
  tree existing = NULL_TREE;

  if (mk & MK_template_mask)
    {
      // FIXME: We could stream the specialization hash?
      spec_entry spec;
      spec.tmpl = tree_node ();
      spec.args = tree_node ();

      if (get_overrun ())
	return error_mark_node;

      DECL_NAME (decl) = DECL_NAME (spec.tmpl);
      DECL_CONTEXT (decl) = DECL_CONTEXT (spec.tmpl);
      DECL_NAME (inner) = DECL_NAME (decl);
      DECL_CONTEXT (inner) = DECL_CONTEXT (decl);

      tree constr = NULL_TREE;
      bool is_decl = mk & MK_tmpl_decl_mask;
      if (is_decl)
	{
	  if (flag_concepts && TREE_CODE (inner) == VAR_DECL)
	    {
	      constr = tree_node ();
	      if (constr)
		set_constraints (inner, constr);
	    }
	  spec.spec = (mk & MK_tmpl_tmpl_mask) ? inner : decl;
	}
      else
	spec.spec = type;
      existing = match_mergeable_specialization (is_decl, &spec);
      if (constr)
	/* We'll add these back later, if this is the new decl. */
	remove_constraints (inner);

      if (!existing)
	; /* We'll add to the table once read. */
      else if (mk & MK_tmpl_decl_mask)
	{
	  /* A declaration specialization. */
	  if (mk & MK_tmpl_tmpl_mask)
	    existing = DECL_TI_TEMPLATE (existing);
	}
      else
	{
	  /* A type specialization. */
	  if (mk & MK_tmpl_tmpl_mask)
	    existing = CLASSTYPE_TI_TEMPLATE (existing);
	  else
	    existing = TYPE_NAME (existing);
	}
    }
  else if (mk == MK_unique)
    kind = "unique";
  else
    {
      tree name = tree_node ();

      merge_key key;
      unsigned code = u ();
      key.ref_q = cp_ref_qualifier ((code >> 0) & 3);
      key.index = code >> 2;

      if (mk == MK_enum)
	key.ret = tree_node ();
      else if (mk == MK_partial
	       || ((mk == MK_named || mk == MK_friend_spec)
		   && TREE_CODE (inner) == FUNCTION_DECL))
	{
	  key.ret = tree_node ();
	  tree arg, *arg_ptr = &key.args;
	  while ((arg = tree_node ())
		 && arg != void_list_node
		 && mk != MK_partial)
	    {
	      *arg_ptr = tree_cons (NULL_TREE, arg, NULL_TREE);
	      arg_ptr = &TREE_CHAIN (*arg_ptr);
	    }
	  *arg_ptr = arg;
	  key.constraints = tree_node ();
	}

      if (get_overrun ())
	return error_mark_node;

      if (mk < MK_indirect_lwm)
	{
	  DECL_NAME (decl) = name;
	  DECL_CONTEXT (decl) = FROB_CONTEXT (container);
	}
      DECL_NAME (inner) = DECL_NAME (decl);
      DECL_CONTEXT (inner) = DECL_CONTEXT (decl);

      if (mk == MK_partial)
	{
	  for (tree spec = DECL_TEMPLATE_SPECIALIZATIONS (key.ret);
	       spec; spec = TREE_CHAIN (spec))
	    {
	      tree tmpl = TREE_VALUE (spec);
	      tree ti = get_template_info (tmpl);
	      if (template_args_equal (key.args, TI_ARGS (ti))
		  && cp_tree_equal (key.constraints,
				    get_constraints
				    (DECL_TEMPLATE_RESULT (tmpl))))
		{
		  existing = tmpl;
		  break;
		}
	    }
	}
      else if (mk == MK_keyed
	       && DECL_LANG_SPECIFIC (name)
	       && DECL_MODULE_KEYED_DECLS_P (name))
	{
	  gcc_checking_assert (TREE_CODE (container) == NAMESPACE_DECL
			       || TREE_CODE (container) == TYPE_DECL);
	  if (auto *set = keyed_table->get (name))
	    if (key.index < set->length ())
	      {
		existing = (*set)[key.index];
		if (existing)
		  {
		    gcc_checking_assert
		      (DECL_IMPLICIT_TYPEDEF_P (existing));
		    if (inner != decl)
		      existing
			= CLASSTYPE_TI_TEMPLATE (TREE_TYPE (existing));
		  }
	      }
	}
      else
	switch (TREE_CODE (container))
	  {
	  default:
	    gcc_unreachable ();

	  case NAMESPACE_DECL:
	    if (is_attached
		&& !(state->is_module () || state->is_partition ()))
	      kind = "unique";
	    else
	      {
		gcc_checking_assert (mk == MK_named || mk == MK_enum);
		tree mvec;
		tree *vslot = mergeable_namespace_slots (container, name,
							 is_attached, &mvec);
		existing = check_mergeable_decl (mk, decl, *vslot, key);
		if (!existing)
		  add_mergeable_namespace_entity (vslot, decl);
		else
		  {
		    /* Note that we now have duplicates to deal with in
		       name lookup. */
		    if (is_attached)
		      BINDING_VECTOR_PARTITION_DUPS_P (mvec) = true;
		    else
		      BINDING_VECTOR_GLOBAL_DUPS_P (mvec) = true;
		  }
	      }
	    break;

	  case FUNCTION_DECL:
	    gcc_checking_assert (mk == MK_local_type);
	    existing = key_local_type (key, container, name);
	    if (existing && inner != decl)
	      existing = TYPE_TI_TEMPLATE (TREE_TYPE (existing));
	    break;

	  case TYPE_DECL:
	    if (is_attached && !(state->is_module () || state->is_partition ())
		/* Implicit member functions can come from
		   anywhere. */
		&& !(DECL_ARTIFICIAL (decl)
		     && TREE_CODE (decl) == FUNCTION_DECL
		     && !DECL_THUNK_P (decl)))
	      kind = "unique";
	    else
	      {
		tree ctx = TREE_TYPE (container);

		/* For some reason templated enumeral types are not marked
		   as COMPLETE_TYPE_P, even though they have members.
		   This may well be a bug elsewhere. */
		if (TREE_CODE (ctx) == ENUMERAL_TYPE)
		  existing = find_enum_member (ctx, name);
		else if (COMPLETE_TYPE_P (ctx))
		  {
		    switch (mk)
		      {
		      default:
			gcc_unreachable ();

		      case MK_named:
			existing = lookup_class_binding (ctx, name);
			if (existing)
			  {
			    tree inner = decl;
			    if (TREE_CODE (inner) == TEMPLATE_DECL
				&& !DECL_MEMBER_TEMPLATE_P (inner))
			      inner = DECL_TEMPLATE_RESULT (inner);

			    existing = check_mergeable_decl
			      (mk, inner, existing, key);

			    if (!existing && DECL_ALIAS_TEMPLATE_P (decl))
			      {} // FIXME: Insert into specialization
			    // tables, we'll need the arguments for that!
			  }
			break;

		      case MK_field:
			{
			  unsigned ix = key.index;
			  for (tree field = TYPE_FIELDS (ctx);
			       field; field = DECL_CHAIN (field))
			    {
			      tree finner = STRIP_TEMPLATE (field);
			      if (TREE_CODE (finner) == TREE_CODE (inner))
				if (!ix--)
				  {
				    existing = field;
				    break;
				  }
			    }
			}
			break;

		      case MK_vtable:
			{
			  unsigned ix = key.index;
			  for (tree vtable = CLASSTYPE_VTABLES (ctx);
			       vtable; vtable = DECL_CHAIN (vtable))
			    if (!ix--)
			      {
				existing = vtable;
				break;
			      }
			}
			break;

		      case MK_as_base:
			{
			  tree as_base = CLASSTYPE_AS_BASE (ctx);
			  if (as_base && as_base != ctx)
			    existing = TYPE_NAME (as_base);
			}
			break;

		      case MK_local_friend:
			{
			  unsigned ix = key.index;
			  for (tree decls = CLASSTYPE_DECL_LIST (ctx);
			       decls; decls = TREE_CHAIN (decls))
			    if (!TREE_PURPOSE (decls) && !ix--)
			      {
				existing
				  = friend_from_decl_list (TREE_VALUE (decls));
				break;
			      }
			}
			break;
		      }

		    if (existing && mk < MK_indirect_lwm && mk != MK_partial
			&& TREE_CODE (decl) == TEMPLATE_DECL
			&& !DECL_MEMBER_TEMPLATE_P (decl))
		      {
			tree ti;
			if (DECL_IMPLICIT_TYPEDEF_P (existing))
			  ti = TYPE_TEMPLATE_INFO (TREE_TYPE (existing));
			else
			  ti = DECL_TEMPLATE_INFO (existing);
			existing = TI_TEMPLATE (ti);
		      }
		  }
	      }
	  }
    }

  dump (dumper::MERGE)
    && dump ("Read:%d's %s merge key (%s) %C:%N", tag, merge_kind_name[mk],
	     existing ? "matched" : kind, TREE_CODE (decl), decl);

  return existing;
}

void
trees_out::binfo_mergeable (tree binfo)
{
  tree dom = binfo;
  while (tree parent = BINFO_INHERITANCE_CHAIN (dom))
    dom = parent;
  tree type = BINFO_TYPE (dom);
  gcc_checking_assert (TYPE_BINFO (type) == dom);
  tree_node (type);
  if (streaming_p ())
    {
      unsigned ix = 0;
      for (; dom != binfo; dom = TREE_CHAIN (dom))
	ix++;
      u (ix);
    }
}

unsigned
trees_in::binfo_mergeable (tree *type)
{
  *type = tree_node ();
  return u ();
}

/* DECL is a just-streamed mergeable decl that should match EXISTING. Check
   it does and issue an appropriate diagnostic if not. Merge any
   bits from DECL to EXISTING. This is stricter matching than
   decls_match, because we can rely on ODR-sameness, and we cannot use
   decls_match because it can cause instantiations of constraints. */

bool
trees_in::is_matching_decl (tree existing, tree decl, bool is_typedef)
{
  // FIXME: We should probably do some duplicate decl-like stuff here
  // (beware, default parms should be the same?) Can we just call
  // duplicate_decls and teach it how to handle the module-specific
  // permitted/required duplications?

  // We know at this point that the decls have matched by key, so we
  // can elide some of the checking.
  gcc_checking_assert (TREE_CODE (existing) == TREE_CODE (decl));

  tree d_inner = decl;
  tree e_inner = existing;
  if (TREE_CODE (decl) == TEMPLATE_DECL)
    {
      d_inner = DECL_TEMPLATE_RESULT (d_inner);
      e_inner = DECL_TEMPLATE_RESULT (e_inner);
      gcc_checking_assert (TREE_CODE (e_inner) == TREE_CODE (d_inner));
    }

  if (TREE_CODE (d_inner) == FUNCTION_DECL)
    {
      tree e_ret = fndecl_declared_return_type (existing);
      tree d_ret = fndecl_declared_return_type (decl);

      if (decl != d_inner && DECL_NAME (d_inner) == fun_identifier
	  && LAMBDA_TYPE_P (DECL_CONTEXT (d_inner)))
	/* This has a recursive type that will compare different. */;
      else if (!same_type_p (d_ret, e_ret))
	goto mismatch;

      tree e_type = TREE_TYPE (e_inner);
      tree d_type = TREE_TYPE (d_inner);

      if (DECL_EXTERN_C_P (d_inner) != DECL_EXTERN_C_P (e_inner))
	goto mismatch;

      for (tree e_args = TYPE_ARG_TYPES (e_type),
	     d_args = TYPE_ARG_TYPES (d_type);
	   e_args != d_args && (e_args || d_args);
	   e_args = TREE_CHAIN (e_args), d_args = TREE_CHAIN (d_args))
	{
	  if (!(e_args && d_args))
	    goto mismatch;

	  if (!same_type_p (TREE_VALUE (d_args), TREE_VALUE (e_args)))
	    goto mismatch;

	  // FIXME: Check default values
	}

      /* If EXISTING has an undeduced or uninstantiated exception
	 specification, but DECL does not, propagate the exception
	 specification. Otherwise we end up asserting or trying to
	 instantiate it in the middle of loading. */
      tree e_spec = TYPE_RAISES_EXCEPTIONS (e_type);
      tree d_spec = TYPE_RAISES_EXCEPTIONS (d_type);
      if (DEFERRED_NOEXCEPT_SPEC_P (e_spec))
	{
	  if (!DEFERRED_NOEXCEPT_SPEC_P (d_spec)
	      || (UNEVALUATED_NOEXCEPT_SPEC_P (e_spec)
		  && !UNEVALUATED_NOEXCEPT_SPEC_P (d_spec)))
	    {
	      dump (dumper::MERGE)
		&& dump ("Propagating instantiated noexcept to %N", existing);
	      TREE_TYPE (existing) = d_type;

	      /* Propagate to existing clones. */
	      tree clone;
	      FOR_EACH_CLONE (clone, existing)
		{
		  if (TREE_TYPE (clone) == e_type)
		    TREE_TYPE (clone) = d_type;
		  else
		    TREE_TYPE (clone)
		      = build_exception_variant (TREE_TYPE (clone), d_spec);
		}
	    }
	}
      else if (!DEFERRED_NOEXCEPT_SPEC_P (d_spec)
	       && !comp_except_specs (d_spec, e_spec, ce_type))
	goto mismatch;

      /* Similarly if EXISTING has an undeduced return type, but DECL's
	 is already deduced. */
      if (undeduced_auto_decl (existing) && !undeduced_auto_decl (decl))
	{
	  dump (dumper::MERGE)
	    && dump ("Propagating deduced return type to %N", existing);
	  TREE_TYPE (existing) = change_return_type (TREE_TYPE (d_type), e_type);
	}
    }
  else if (is_typedef)
    {
      if (!DECL_ORIGINAL_TYPE (e_inner)
	  || !same_type_p (DECL_ORIGINAL_TYPE (d_inner),
			   DECL_ORIGINAL_TYPE (e_inner)))
	goto mismatch;
    }
  /* Using cp_tree_equal because we can meet TYPE_ARGUMENT_PACKs
     here. I suspect the entities that directly do that are things
     that shouldn't go to duplicate_decls (FIELD_DECLs etc). */
  else if (!cp_tree_equal (TREE_TYPE (decl), TREE_TYPE (existing)))
    {
    mismatch:
      if (DECL_IS_UNDECLARED_BUILTIN (existing))
	/* Just like duplicate_decls, presume the user knows what
	   they're doing in overriding a builtin. */
	TREE_TYPE (existing) = TREE_TYPE (decl);
      else if (decl_function_context (decl))
	/* The type of a mergeable local entity (such as a function scope
	   capturing lambda's closure type fields) can depend on an
	   unmergeable local entity (such as a local variable), so type
	   equality isn't feasible in general for local entities. */;
      else
	{
	  // FIXME:QOI Might be template specialization from a module,
11575 // not necessarily global module
11576 error_at (DECL_SOURCE_LOCATION (decl),
11577 "conflicting global module declaration %#qD", decl);
11578 inform (DECL_SOURCE_LOCATION (existing),
11579 "existing declaration %#qD", existing);
11580 return false;
11581 }
11582 }
11583
11584 if (DECL_IS_UNDECLARED_BUILTIN (existing)
11585 && !DECL_IS_UNDECLARED_BUILTIN (decl))
11586 {
11587 /* We're matching a builtin that the user has yet to declare.
11588 We are the one! This is very much duplicate-decl
11589 shenanigans. */
11590 DECL_SOURCE_LOCATION (existing) = DECL_SOURCE_LOCATION (decl);
11591 if (TREE_CODE (decl) != TYPE_DECL)
11592 {
11593 /* Propagate exceptions etc. */
11594 TREE_TYPE (existing) = TREE_TYPE (decl);
11595 TREE_NOTHROW (existing) = TREE_NOTHROW (decl);
11596 }
11597 /* This is actually an import! */
11598 DECL_MODULE_IMPORT_P (existing) = true;
11599
11600 /* Yay, sliced! */
11601 existing->base = decl->base;
11602
11603 if (TREE_CODE (decl) == FUNCTION_DECL)
11604 {
11605 /* Ew :( */
11606 memcpy (dest: &existing->decl_common.size,
11607 src: &decl->decl_common.size,
11608 n: (offsetof (tree_decl_common, pt_uid)
11609 - offsetof (tree_decl_common, size)));
11610 auto bltin_class = DECL_BUILT_IN_CLASS (decl);
11611 existing->function_decl.built_in_class = bltin_class;
11612 auto fncode = DECL_UNCHECKED_FUNCTION_CODE (decl);
11613 DECL_UNCHECKED_FUNCTION_CODE (existing) = fncode;
11614 if (existing->function_decl.built_in_class == BUILT_IN_NORMAL)
11615 {
11616 if (builtin_decl_explicit_p (fncode: built_in_function (fncode)))
11617 switch (fncode)
11618 {
11619 case BUILT_IN_STPCPY:
11620 set_builtin_decl_implicit_p
11621 (fncode: built_in_function (fncode), implicit_p: true);
11622 break;
11623 default:
11624 set_builtin_decl_declared_p
11625 (fncode: built_in_function (fncode), declared_p: true);
11626 break;
11627 }
11628 copy_attributes_to_builtin (decl);
11629 }
11630 }
11631 }
11632
11633 if (VAR_OR_FUNCTION_DECL_P (decl)
11634 && DECL_TEMPLATE_INSTANTIATED (decl))
11635 /* Don't instantiate again! */
11636 DECL_TEMPLATE_INSTANTIATED (existing) = true;
11637
11638 if (TREE_CODE (d_inner) == FUNCTION_DECL
11639 && DECL_DECLARED_INLINE_P (d_inner))
11640 DECL_DECLARED_INLINE_P (e_inner) = true;
11641 if (!DECL_EXTERNAL (d_inner))
11642 DECL_EXTERNAL (e_inner) = false;
11643
11644 // FIXME: Check default tmpl and fn parms here
11645
11646 return true;
11647}
11648
/* FN is an implicit member function that we've discovered is new to
   the class.  Add it to the TYPE_FIELDS chain and the method vector.
   Reset the appropriate classtype lazy flag.  */

bool
trees_in::install_implicit_member (tree fn)
{
  tree ctx = DECL_CONTEXT (fn);
  tree name = DECL_NAME (fn);
  /* We know these are synthesized, so the set of expected prototypes
     is quite restricted.  We're not validating correctness, just
     distinguishing between the small set of possibilities.  */
  tree parm_type = TREE_VALUE (FUNCTION_FIRST_USER_PARMTYPE (fn));
  if (IDENTIFIER_CTOR_P (name))
    {
      if (CLASSTYPE_LAZY_DEFAULT_CTOR (ctx)
          && VOID_TYPE_P (parm_type))
        CLASSTYPE_LAZY_DEFAULT_CTOR (ctx) = false;
      else if (!TYPE_REF_P (parm_type))
        return false;
      else if (CLASSTYPE_LAZY_COPY_CTOR (ctx)
               && !TYPE_REF_IS_RVALUE (parm_type))
        CLASSTYPE_LAZY_COPY_CTOR (ctx) = false;
      else if (CLASSTYPE_LAZY_MOVE_CTOR (ctx))
        CLASSTYPE_LAZY_MOVE_CTOR (ctx) = false;
      else
        return false;
    }
  else if (IDENTIFIER_DTOR_P (name))
    {
      if (CLASSTYPE_LAZY_DESTRUCTOR (ctx))
        CLASSTYPE_LAZY_DESTRUCTOR (ctx) = false;
      else
        return false;
      if (DECL_VIRTUAL_P (fn))
        /* A virtual dtor should have been created when the class
           became complete.  */
        return false;
    }
  else if (name == assign_op_identifier)
    {
      if (!TYPE_REF_P (parm_type))
        return false;
      else if (CLASSTYPE_LAZY_COPY_ASSIGN (ctx)
               && !TYPE_REF_IS_RVALUE (parm_type))
        CLASSTYPE_LAZY_COPY_ASSIGN (ctx) = false;
      else if (CLASSTYPE_LAZY_MOVE_ASSIGN (ctx))
        CLASSTYPE_LAZY_MOVE_ASSIGN (ctx) = false;
      else
        return false;
    }
  else
    return false;

  dump (dumper::MERGE) && dump ("Adding implicit member %N", fn);

  DECL_CHAIN (fn) = TYPE_FIELDS (ctx);
  TYPE_FIELDS (ctx) = fn;

  add_method (ctx, fn, false);

  /* Propagate TYPE_FIELDS.  */
  fixup_type_variants (ctx);

  return true;
}
11715
/* Return true if DECL has a definition that would be interesting to
   write out.  */

static bool
has_definition (tree decl)
{
  bool is_tmpl = TREE_CODE (decl) == TEMPLATE_DECL;
  if (is_tmpl)
    decl = DECL_TEMPLATE_RESULT (decl);

  switch (TREE_CODE (decl))
    {
    default:
      break;

    case FUNCTION_DECL:
      if (!DECL_SAVED_TREE (decl))
        /* Not defined.  */
        break;

      if (DECL_DECLARED_INLINE_P (decl))
        return true;

      if (DECL_THIS_STATIC (decl)
          && (header_module_p ()
              || (!DECL_LANG_SPECIFIC (decl) || !DECL_MODULE_PURVIEW_P (decl))))
        /* GM static function.  */
        return true;

      if (DECL_TEMPLATE_INFO (decl))
        {
          int use_tpl = DECL_USE_TEMPLATE (decl);

          // FIXME: Partial specializations have definitions too.
          if (use_tpl < 2)
            return true;
        }
      break;

    case TYPE_DECL:
      {
        tree type = TREE_TYPE (decl);
        if (type == TYPE_MAIN_VARIANT (type)
            && decl == TYPE_NAME (type)
            && (TREE_CODE (type) == ENUMERAL_TYPE
                ? TYPE_VALUES (type) : TYPE_FIELDS (type)))
          return true;
      }
      break;

    case VAR_DECL:
      /* DECL_INITIALIZED_P might not be set on a dependent VAR_DECL.  */
      if (DECL_LANG_SPECIFIC (decl)
          && DECL_TEMPLATE_INFO (decl)
          && DECL_INITIAL (decl))
        return true;
      else
        {
          if (!DECL_INITIALIZED_P (decl))
            return false;

          if (header_module_p ()
              || (!DECL_LANG_SPECIFIC (decl) || !DECL_MODULE_PURVIEW_P (decl)))
            /* GM static variable.  */
            return true;

          if (!TREE_CONSTANT (decl))
            return false;

          return true;
        }
      break;

    case CONCEPT_DECL:
      if (DECL_INITIAL (decl))
        return true;

      break;
    }

  return false;
}
11798
uintptr_t *
trees_in::find_duplicate (tree existing)
{
  if (!duplicates)
    return NULL;

  return duplicates->get (existing);
}

/* We're starting to read a duplicate DECL.  EXISTING is the already
   known node.  */

void
trees_in::register_duplicate (tree decl, tree existing)
{
  if (!duplicates)
    duplicates = new duplicate_hash_map (40);

  bool existed;
  uintptr_t &slot = duplicates->get_or_insert (existing, &existed);
  gcc_checking_assert (!existed);
  slot = reinterpret_cast<uintptr_t> (decl);

  if (TREE_CODE (decl) == TEMPLATE_DECL)
    /* Also register the DECL_TEMPLATE_RESULT as a duplicate so
       that passing decl's _RESULT to maybe_duplicate naturally
       gives us existing's _RESULT back.  */
    register_duplicate (DECL_TEMPLATE_RESULT (decl),
                        DECL_TEMPLATE_RESULT (existing));
}

/* We've read a definition of MAYBE_EXISTING.  If not a duplicate,
   return MAYBE_EXISTING (into which the definition should be
   installed).  Otherwise return NULL if already known bad, or the
   duplicate we read (for ODR checking, or extracting additional merge
   information).  */

tree
trees_in::odr_duplicate (tree maybe_existing, bool has_defn)
{
  tree res = NULL_TREE;

  if (uintptr_t *dup = find_duplicate (maybe_existing))
    {
      if (!(*dup & 1))
        res = reinterpret_cast<tree> (*dup);
    }
  else
    res = maybe_existing;

  assert_definition (maybe_existing, res && !has_defn);

  // FIXME: We probably need to return the template, so that the
  // template header can be checked?
  return res ? STRIP_TEMPLATE (res) : NULL_TREE;
}
11855
/* The following writer functions rely on the current behaviour of
   depset::hash::add_dependency making the decl and defn depset nodes
   depend on each other.  That way we don't have to worry about seeding
   the tree map with named decls that cannot be looked up by name (i.e.
   template and function parms).  We know the decl and definition will
   be in the same cluster, which is what we want.  */

void
trees_out::write_function_def (tree decl)
{
  tree_node (DECL_RESULT (decl));
  tree_node (DECL_INITIAL (decl));
  tree_node (DECL_SAVED_TREE (decl));
  tree_node (DECL_FRIEND_CONTEXT (decl));

  constexpr_fundef *cexpr = retrieve_constexpr_fundef (decl);

  if (streaming_p ())
    u (cexpr != nullptr);
  if (cexpr)
    {
      chained_decls (cexpr->parms);
      tree_node (cexpr->result);
      tree_node (cexpr->body);
    }

  function *f = DECL_STRUCT_FUNCTION (decl);

  if (streaming_p ())
    {
      unsigned flags = 0;

      if (f)
        flags |= 2;
      if (DECL_NOT_REALLY_EXTERN (decl))
        flags |= 1;

      u (flags);
    }

  if (state && f)
    {
      state->write_location (*this, f->function_start_locus);
      state->write_location (*this, f->function_end_locus);
    }
}
11902
void
trees_out::mark_function_def (tree)
{
}

bool
trees_in::read_function_def (tree decl, tree maybe_template)
{
  dump () && dump ("Reading function definition %N", decl);
  tree result = tree_node ();
  tree initial = tree_node ();
  tree saved = tree_node ();
  tree context = tree_node ();
  constexpr_fundef cexpr;
  post_process_data pdata {};
  pdata.decl = maybe_template;

  tree maybe_dup = odr_duplicate (maybe_template, DECL_SAVED_TREE (decl));
  bool installing = maybe_dup && !DECL_SAVED_TREE (decl);

  if (u ())
    {
      cexpr.parms = chained_decls ();
      cexpr.result = tree_node ();
      cexpr.body = tree_node ();
      cexpr.decl = decl;
    }
  else
    cexpr.decl = NULL_TREE;

  unsigned flags = u ();

  if (flags & 2)
    {
      pdata.start_locus = state->read_location (*this);
      pdata.end_locus = state->read_location (*this);
    }

  if (get_overrun ())
    return false;

  if (installing)
    {
      DECL_NOT_REALLY_EXTERN (decl) = flags & 1;
      DECL_RESULT (decl) = result;
      DECL_INITIAL (decl) = initial;
      DECL_SAVED_TREE (decl) = saved;

      if (context)
        SET_DECL_FRIEND_CONTEXT (decl, context);
      if (cexpr.decl)
        register_constexpr_fundef (cexpr);
      post_process (pdata);
    }
  else if (maybe_dup)
    {
      // FIXME:QOI Check matching defn.
    }

  return true;
}
11964
/* Also for CONCEPT_DECLs.  */

void
trees_out::write_var_def (tree decl)
{
  tree init = DECL_INITIAL (decl);
  tree_node (init);
  if (!init)
    {
      tree dyn_init = NULL_TREE;

      /* We only need to write initializers in header modules.  */
      if (header_module_p () && DECL_NONTRIVIALLY_INITIALIZED_P (decl))
        {
          dyn_init = value_member (decl,
                                   CP_DECL_THREAD_LOCAL_P (decl)
                                   ? tls_aggregates : static_aggregates);
          gcc_checking_assert (dyn_init);
          /* Mark it so write_inits knows this is needed.  */
          TREE_LANG_FLAG_0 (dyn_init) = true;
          dyn_init = TREE_PURPOSE (dyn_init);
        }
      tree_node (dyn_init);
    }
}

void
trees_out::mark_var_def (tree)
{
}

bool
trees_in::read_var_def (tree decl, tree maybe_template)
{
  /* Do not mark the virtual table entries as used.  */
  bool vtable = VAR_P (decl) && DECL_VTABLE_OR_VTT_P (decl);
  unused += vtable;
  tree init = tree_node ();
  tree dyn_init = init ? NULL_TREE : tree_node ();
  unused -= vtable;

  if (get_overrun ())
    return false;

  bool initialized = (VAR_P (decl) ? bool (DECL_INITIALIZED_P (decl))
                      : bool (DECL_INITIAL (decl)));
  tree maybe_dup = odr_duplicate (maybe_template, initialized);
  bool installing = maybe_dup && !initialized;
  if (installing)
    {
      if (DECL_EXTERNAL (decl))
        DECL_NOT_REALLY_EXTERN (decl) = true;
      if (VAR_P (decl))
        {
          DECL_INITIALIZED_P (decl) = true;
          if (maybe_dup && DECL_INITIALIZED_BY_CONSTANT_EXPRESSION_P (maybe_dup))
            DECL_INITIALIZED_BY_CONSTANT_EXPRESSION_P (decl) = true;
          if (DECL_IMPLICIT_INSTANTIATION (decl)
              || (DECL_CLASS_SCOPE_P (decl)
                  && !DECL_VTABLE_OR_VTT_P (decl)
                  && !DECL_TEMPLATE_INFO (decl)))
            note_vague_linkage_variable (decl);
        }
      DECL_INITIAL (decl) = init;
      if (!dyn_init)
        ;
      else if (CP_DECL_THREAD_LOCAL_P (decl))
        tls_aggregates = tree_cons (dyn_init, decl, tls_aggregates);
      else
        static_aggregates = tree_cons (dyn_init, decl, static_aggregates);
    }
  else if (maybe_dup)
    {
      // FIXME:QOI Check matching defn.
    }

  return true;
}
12043
/* If MEMBER doesn't have an independent life outside the class,
   return it (or its TEMPLATE_DECL).  Otherwise NULL.  */

static tree
member_owned_by_class (tree member)
{
  gcc_assert (DECL_P (member));

  /* Clones are owned by their origin.  */
  if (DECL_CLONED_FUNCTION_P (member))
    return NULL;

  if (TREE_CODE (member) == FIELD_DECL)
    /* FIELD_DECLs can have template info in some cases.  We always
       want the FIELD_DECL though, as there's never a TEMPLATE_DECL
       wrapping them.  */
    return member;

  int use_tpl = -1;
  if (tree ti = node_template_info (member, use_tpl))
    {
      // FIXME: Don't bail on things that CANNOT have their own
      // template header.  No, make sure they're in the same cluster.
      if (use_tpl > 0)
        return NULL_TREE;

      if (DECL_TEMPLATE_RESULT (TI_TEMPLATE (ti)) == member)
        member = TI_TEMPLATE (ti);
    }
  return member;
}
12075
void
trees_out::write_class_def (tree defn)
{
  gcc_assert (DECL_P (defn));
  if (streaming_p ())
    dump () && dump ("Writing class definition %N", defn);

  tree type = TREE_TYPE (defn);
  tree_node (TYPE_SIZE (type));
  tree_node (TYPE_SIZE_UNIT (type));
  tree_node (TYPE_VFIELD (type));
  tree_node (TYPE_BINFO (type));

  vec_chained_decls (TYPE_FIELDS (type));

  /* Every class but __as_base has a type-specific.  */
  gcc_checking_assert (!TYPE_LANG_SPECIFIC (type) == IS_FAKE_BASE_TYPE (type));

  if (TYPE_LANG_SPECIFIC (type))
    {
      {
        vec<tree, va_gc> *v = CLASSTYPE_MEMBER_VEC (type);
        if (!v)
          {
            gcc_checking_assert (!streaming_p ());
            /* Force a class vector.  */
            v = set_class_bindings (type, -1);
            gcc_checking_assert (v);
          }

        unsigned len = v->length ();
        if (streaming_p ())
          u (len);
        for (unsigned ix = 0; ix != len; ix++)
          {
            tree m = (*v)[ix];
            if (TREE_CODE (m) == TYPE_DECL
                && DECL_ARTIFICIAL (m)
                && TYPE_STUB_DECL (TREE_TYPE (m)) == m)
              /* This is a using-decl for a type, or an anonymous
                 struct (maybe with a typedef name).  Write the type.  */
              m = TREE_TYPE (m);
            tree_node (m);
          }
      }
      tree_node (CLASSTYPE_LAMBDA_EXPR (type));

      /* TYPE_CONTAINS_VPTR_P looks at the vbase vector, which the
         reader won't know at this point.  */
      int has_vptr = TYPE_CONTAINS_VPTR_P (type);

      if (streaming_p ())
        {
          unsigned nvbases = vec_safe_length (CLASSTYPE_VBASECLASSES (type));
          u (nvbases);
          i (has_vptr);
        }

      if (has_vptr)
        {
          tree_vec (CLASSTYPE_PURE_VIRTUALS (type));
          tree_pair_vec (CLASSTYPE_VCALL_INDICES (type));
          tree_node (CLASSTYPE_KEY_METHOD (type));
        }
    }

  if (TYPE_LANG_SPECIFIC (type))
    {
      tree_node (CLASSTYPE_PRIMARY_BINFO (type));

      tree as_base = CLASSTYPE_AS_BASE (type);
      if (as_base)
        as_base = TYPE_NAME (as_base);
      tree_node (as_base);

      /* Write the vtables.  */
      tree vtables = CLASSTYPE_VTABLES (type);
      vec_chained_decls (vtables);
      for (; vtables; vtables = TREE_CHAIN (vtables))
        write_definition (vtables);

      /* Write the friend classes.  */
      tree_list (CLASSTYPE_FRIEND_CLASSES (type), false);

      /* Write the friend functions.  */
      for (tree friends = DECL_FRIENDLIST (defn);
           friends; friends = TREE_CHAIN (friends))
        {
          /* Name of these friends.  */
          tree_node (TREE_PURPOSE (friends));
          tree_list (TREE_VALUE (friends), false);
        }
      /* End of friend fns.  */
      tree_node (NULL_TREE);

      /* Write the decl list.  */
      tree_list (CLASSTYPE_DECL_LIST (type), true);

      if (TYPE_CONTAINS_VPTR_P (type))
        {
          /* Write the thunks.  */
          for (tree decls = TYPE_FIELDS (type);
               decls; decls = DECL_CHAIN (decls))
            if (TREE_CODE (decls) == FUNCTION_DECL
                && DECL_VIRTUAL_P (decls)
                && DECL_THUNKS (decls))
              {
                tree_node (decls);
                /* Thunks are always unique, so chaining is ok.  */
                chained_decls (DECL_THUNKS (decls));
              }
          tree_node (NULL_TREE);
        }
    }
}
12191
void
trees_out::mark_class_member (tree member, bool do_defn)
{
  gcc_assert (DECL_P (member));

  member = member_owned_by_class (member);
  if (member)
    mark_declaration (member, do_defn && has_definition (member));
}

void
trees_out::mark_class_def (tree defn)
{
  gcc_assert (DECL_P (defn));
  tree type = TREE_TYPE (defn);
  /* Mark the class members that are not type-decls and cannot have
     independent definitions.  */
  for (tree member = TYPE_FIELDS (type); member; member = DECL_CHAIN (member))
    if (TREE_CODE (member) == FIELD_DECL
        || TREE_CODE (member) == USING_DECL
        /* A cloned enum-decl from 'using enum unrelated;'  */
        || (TREE_CODE (member) == CONST_DECL
            && DECL_CONTEXT (member) == type))
      {
        mark_class_member (member);
        if (TREE_CODE (member) == FIELD_DECL)
          if (tree repr = DECL_BIT_FIELD_REPRESENTATIVE (member))
            /* If we're marking a class template definition, then
               this'll contain the width (as set by grokbitfield)
               instead of a decl.  */
            if (DECL_P (repr))
              mark_declaration (repr, false);
      }

  /* Mark the binfo hierarchy.  */
  for (tree child = TYPE_BINFO (type); child; child = TREE_CHAIN (child))
    mark_by_value (child);

  if (TYPE_LANG_SPECIFIC (type))
    {
      for (tree vtable = CLASSTYPE_VTABLES (type);
           vtable; vtable = TREE_CHAIN (vtable))
        mark_declaration (vtable, true);

      if (TYPE_CONTAINS_VPTR_P (type))
        /* Mark the thunks, they belong to the class definition,
           /not/ the thunked-to function.  */
        for (tree decls = TYPE_FIELDS (type);
             decls; decls = DECL_CHAIN (decls))
          if (TREE_CODE (decls) == FUNCTION_DECL)
            for (tree thunks = DECL_THUNKS (decls);
                 thunks; thunks = DECL_CHAIN (thunks))
              mark_declaration (thunks, false);
    }
}

/* Nop sorting, needed for resorting the member vec.  */

static void
nop (void *, void *, void *)
{
}
12254
bool
trees_in::read_class_def (tree defn, tree maybe_template)
{
  gcc_assert (DECL_P (defn));
  dump () && dump ("Reading class definition %N", defn);
  tree type = TREE_TYPE (defn);
  tree size = tree_node ();
  tree size_unit = tree_node ();
  tree vfield = tree_node ();
  tree binfo = tree_node ();
  vec<tree, va_gc> *vbase_vec = NULL;
  vec<tree, va_gc> *member_vec = NULL;
  vec<tree, va_gc> *pure_virts = NULL;
  vec<tree_pair_s, va_gc> *vcall_indices = NULL;
  tree key_method = NULL_TREE;
  tree lambda = NULL_TREE;

  /* Read the fields.  */
  vec<tree, va_heap> *fields = vec_chained_decls ();

  if (TYPE_LANG_SPECIFIC (type))
    {
      if (unsigned len = u ())
        {
          vec_alloc (member_vec, len);
          for (unsigned ix = 0; ix != len; ix++)
            {
              tree m = tree_node ();
              if (get_overrun ())
                break;
              if (TYPE_P (m))
                m = TYPE_STUB_DECL (m);
              member_vec->quick_push (m);
            }
        }
      lambda = tree_node ();

      if (!get_overrun ())
        {
          unsigned nvbases = u ();
          if (nvbases)
            {
              vec_alloc (vbase_vec, nvbases);
              for (tree child = binfo; child; child = TREE_CHAIN (child))
                if (BINFO_VIRTUAL_P (child))
                  vbase_vec->quick_push (child);
            }
        }

      if (!get_overrun ())
        {
          int has_vptr = i ();
          if (has_vptr)
            {
              pure_virts = tree_vec ();
              vcall_indices = tree_pair_vec ();
              key_method = tree_node ();
            }
        }
    }

  tree maybe_dup = odr_duplicate (maybe_template, TYPE_SIZE (type));
  bool installing = maybe_dup && !TYPE_SIZE (type);
  if (installing)
    {
      if (maybe_dup != defn)
        {
          // FIXME: This is needed on other defns too, almost
          // duplicate-decl like?  See is_matching_decl too.
          /* Copy flags from the duplicate.  */
          tree type_dup = TREE_TYPE (maybe_dup);

          /* Core pieces.  */
          TYPE_MODE_RAW (type) = TYPE_MODE_RAW (type_dup);
          SET_DECL_MODE (defn, DECL_MODE (maybe_dup));
          TREE_ADDRESSABLE (type) = TREE_ADDRESSABLE (type_dup);
          DECL_SIZE (defn) = DECL_SIZE (maybe_dup);
          DECL_SIZE_UNIT (defn) = DECL_SIZE_UNIT (maybe_dup);
          DECL_ALIGN_RAW (defn) = DECL_ALIGN_RAW (maybe_dup);
          DECL_WARN_IF_NOT_ALIGN_RAW (defn)
            = DECL_WARN_IF_NOT_ALIGN_RAW (maybe_dup);
          DECL_USER_ALIGN (defn) = DECL_USER_ALIGN (maybe_dup);

          /* C++ pieces.  */
          TYPE_POLYMORPHIC_P (type) = TYPE_POLYMORPHIC_P (type_dup);
          TYPE_HAS_USER_CONSTRUCTOR (type)
            = TYPE_HAS_USER_CONSTRUCTOR (type_dup);
          TYPE_HAS_NONTRIVIAL_DESTRUCTOR (type)
            = TYPE_HAS_NONTRIVIAL_DESTRUCTOR (type_dup);

          if (auto ls = TYPE_LANG_SPECIFIC (type_dup))
            {
              if (TYPE_LANG_SPECIFIC (type))
                {
                  CLASSTYPE_BEFRIENDING_CLASSES (type_dup)
                    = CLASSTYPE_BEFRIENDING_CLASSES (type);
                  if (!ANON_AGGR_TYPE_P (type))
                    CLASSTYPE_TYPEINFO_VAR (type_dup)
                      = CLASSTYPE_TYPEINFO_VAR (type);
                }
              for (tree v = type; v; v = TYPE_NEXT_VARIANT (v))
                TYPE_LANG_SPECIFIC (v) = ls;
            }
        }

      TYPE_SIZE (type) = size;
      TYPE_SIZE_UNIT (type) = size_unit;

      if (fields)
        {
          tree *chain = &TYPE_FIELDS (type);
          unsigned len = fields->length ();
          for (unsigned ix = 0; ix != len; ix++)
            {
              tree decl = (*fields)[ix];

              if (!decl)
                {
                  /* An anonymous struct with typedef name.  */
                  tree tdef = (*fields)[ix+1];
                  decl = TYPE_STUB_DECL (TREE_TYPE (tdef));
                  gcc_checking_assert (IDENTIFIER_ANON_P (DECL_NAME (decl))
                                       && decl != tdef);
                }

              gcc_checking_assert (!*chain == !DECL_CLONED_FUNCTION_P (decl));
              *chain = decl;
              chain = &DECL_CHAIN (decl);

              if (TREE_CODE (decl) == FIELD_DECL
                  && ANON_AGGR_TYPE_P (TREE_TYPE (decl)))
                {
                  tree anon_type = TYPE_MAIN_VARIANT (TREE_TYPE (decl));
                  if (DECL_NAME (defn) == as_base_identifier)
                    /* ANON_AGGR_TYPE_FIELD should already point to the
                       original FIELD_DECL; don't overwrite it to point
                       to the as-base FIELD_DECL copy.  */
                    gcc_checking_assert (ANON_AGGR_TYPE_FIELD (anon_type));
                  else
                    ANON_AGGR_TYPE_FIELD (anon_type) = decl;
                }

              if (TREE_CODE (decl) == USING_DECL
                  && TREE_CODE (USING_DECL_SCOPE (decl)) == RECORD_TYPE)
                {
                  /* Reconstruct DECL_ACCESS.  */
                  tree decls = USING_DECL_DECLS (decl);
                  tree access = declared_access (decl);

                  for (ovl_iterator iter (decls); iter; ++iter)
                    {
                      tree d = *iter;

                      retrofit_lang_decl (d);
                      tree list = DECL_ACCESS (d);

                      if (!purpose_member (type, list))
                        DECL_ACCESS (d) = tree_cons (type, access, list);
                    }
                }
            }
        }

      TYPE_VFIELD (type) = vfield;
      TYPE_BINFO (type) = binfo;

      if (TYPE_LANG_SPECIFIC (type))
        {
          CLASSTYPE_LAMBDA_EXPR (type) = lambda;

          CLASSTYPE_MEMBER_VEC (type) = member_vec;
          CLASSTYPE_PURE_VIRTUALS (type) = pure_virts;
          CLASSTYPE_VCALL_INDICES (type) = vcall_indices;

          CLASSTYPE_KEY_METHOD (type) = key_method;

          CLASSTYPE_VBASECLASSES (type) = vbase_vec;

          /* Resort the member vector.  */
          resort_type_member_vec (member_vec, NULL, nop, NULL);
        }
    }
  else if (maybe_dup)
    {
      // FIXME:QOI Check matching defn.
    }

  if (TYPE_LANG_SPECIFIC (type))
    {
      tree primary = tree_node ();
      tree as_base = tree_node ();

      if (as_base)
        as_base = TREE_TYPE (as_base);

      /* Read the vtables.  */
      vec<tree, va_heap> *vtables = vec_chained_decls ();
      if (vtables)
        {
          unsigned len = vtables->length ();
          for (unsigned ix = 0; ix != len; ix++)
            {
              tree vtable = (*vtables)[ix];
              read_var_def (vtable, vtable);
            }
        }

      tree friend_classes = tree_list (false);
      tree friend_functions = NULL_TREE;
      for (tree *chain = &friend_functions;
           tree name = tree_node (); chain = &TREE_CHAIN (*chain))
        {
          tree val = tree_list (false);
          *chain = build_tree_list (name, val);
        }
      tree decl_list = tree_list (true);

      if (installing)
        {
          CLASSTYPE_PRIMARY_BINFO (type) = primary;
          CLASSTYPE_AS_BASE (type) = as_base;

          if (vtables)
            {
              if (!CLASSTYPE_KEY_METHOD (type)
                  /* Sneaky user may have defined it inline
                     out-of-class.  */
                  || DECL_DECLARED_INLINE_P (CLASSTYPE_KEY_METHOD (type)))
                vec_safe_push (keyed_classes, type);
              unsigned len = vtables->length ();
              tree *chain = &CLASSTYPE_VTABLES (type);
              for (unsigned ix = 0; ix != len; ix++)
                {
                  tree vtable = (*vtables)[ix];
                  gcc_checking_assert (!*chain);
                  *chain = vtable;
                  chain = &DECL_CHAIN (vtable);
                }
            }
          CLASSTYPE_FRIEND_CLASSES (type) = friend_classes;
          DECL_FRIENDLIST (defn) = friend_functions;
          CLASSTYPE_DECL_LIST (type) = decl_list;

          for (; friend_classes; friend_classes = TREE_CHAIN (friend_classes))
            {
              tree f = TREE_VALUE (friend_classes);

              if (CLASS_TYPE_P (f))
                {
                  CLASSTYPE_BEFRIENDING_CLASSES (f)
                    = tree_cons (NULL_TREE, type,
                                 CLASSTYPE_BEFRIENDING_CLASSES (f));
                  dump () && dump ("Class %N befriending %C:%N",
                                   type, TREE_CODE (f), f);
                }
            }

          for (; friend_functions;
               friend_functions = TREE_CHAIN (friend_functions))
            for (tree friend_decls = TREE_VALUE (friend_functions);
                 friend_decls; friend_decls = TREE_CHAIN (friend_decls))
              {
                tree f = TREE_VALUE (friend_decls);

                DECL_BEFRIENDING_CLASSES (f)
                  = tree_cons (NULL_TREE, type, DECL_BEFRIENDING_CLASSES (f));
                dump () && dump ("Class %N befriending %C:%N",
                                 type, TREE_CODE (f), f);
              }
        }

      if (TYPE_CONTAINS_VPTR_P (type))
        /* Read and install the thunks.  */
        while (tree vfunc = tree_node ())
          {
            tree thunks = chained_decls ();
            if (installing)
              SET_DECL_THUNKS (vfunc, thunks);
          }

      vec_free (vtables);
    }

  /* Propagate to all variants.  */
  if (installing)
    fixup_type_variants (type);

  /* IS_FAKE_BASE_TYPE is inaccurate at this point, because if this is
     the fake base, we've not hooked it into the containing class's
     data structure yet.  Fortunately it has a unique name.  */
  if (installing
      && DECL_NAME (defn) != as_base_identifier
      && (!CLASSTYPE_TEMPLATE_INFO (type)
          || !uses_template_parms (TI_ARGS (CLASSTYPE_TEMPLATE_INFO (type)))))
    /* Emit debug info.  It'd be nice to know if the interface TU
       already emitted this.  */
    rest_of_type_compilation (type, !LOCAL_CLASS_P (type));

  vec_free (fields);

  return !get_overrun ();
}
12557
12558void
12559trees_out::write_enum_def (tree decl)
12560{
12561 tree type = TREE_TYPE (decl);
12562
12563 tree_node (TYPE_VALUES (type));
12564 /* Note that we stream TYPE_MIN/MAX_VALUE directly as part of the
12565 ENUMERAL_TYPE. */
12566}
12567
void
trees_out::mark_enum_def (tree decl)
{
  tree type = TREE_TYPE (decl);

  for (tree values = TYPE_VALUES (type); values; values = TREE_CHAIN (values))
    {
      tree cst = TREE_VALUE (values);
      mark_by_value (cst);
      /* We must mark the init to avoid circularity in tt_enum_int.  */
      if (tree init = DECL_INITIAL (cst))
        if (TREE_CODE (init) == INTEGER_CST)
          mark_by_value (init);
    }
}
12583
bool
trees_in::read_enum_def (tree defn, tree maybe_template)
{
  tree type = TREE_TYPE (defn);
  tree values = tree_node ();

  if (get_overrun ())
    return false;

  tree maybe_dup = odr_duplicate (maybe_template, TYPE_VALUES (type));
  bool installing = maybe_dup && !TYPE_VALUES (type);

  if (installing)
    {
      TYPE_VALUES (type) = values;
      /* Note that we stream TYPE_MIN/MAX_VALUE directly as part of the
         ENUMERAL_TYPE.  */

      rest_of_type_compilation (type, DECL_NAMESPACE_SCOPE_P (defn));
    }
  else if (maybe_dup)
    {
      tree known = TYPE_VALUES (type);
      for (; known && values;
           known = TREE_CHAIN (known), values = TREE_CHAIN (values))
        {
          tree known_decl = TREE_VALUE (known);
          tree new_decl = TREE_VALUE (values);

          if (DECL_NAME (known_decl) != DECL_NAME (new_decl))
            break;

          new_decl = maybe_duplicate (new_decl);

          if (!cp_tree_equal (DECL_INITIAL (known_decl),
                              DECL_INITIAL (new_decl)))
            break;
        }

      if (known || values)
        {
          error_at (DECL_SOURCE_LOCATION (maybe_dup),
                    "definition of %qD does not match", maybe_dup);
          inform (DECL_SOURCE_LOCATION (defn),
                  "existing definition %qD", defn);

          tree known_decl = NULL_TREE, new_decl = NULL_TREE;

          if (known)
            known_decl = TREE_VALUE (known);
          if (values)
            new_decl = maybe_duplicate (TREE_VALUE (values));

          if (known_decl && new_decl)
            {
              inform (DECL_SOURCE_LOCATION (new_decl),
                      "... this enumerator %qD", new_decl);
              inform (DECL_SOURCE_LOCATION (known_decl),
                      "enumerator %qD does not match ...", known_decl);
            }
          else if (known_decl || new_decl)
            {
              tree extra = known_decl ? known_decl : new_decl;
              inform (DECL_SOURCE_LOCATION (extra),
                      "additional enumerators beginning with %qD", extra);
            }
          else
            inform (DECL_SOURCE_LOCATION (maybe_dup),
                    "enumeration range differs");

          /* Mark it bad.  */
          unmatched_duplicate (maybe_template);
        }
    }

  return true;
}
12661
/* Write out the body of DECL.  See above circularity note.  */

void
trees_out::write_definition (tree decl)
{
  if (streaming_p ())
    {
      assert_definition (decl);
      dump ()
        && dump ("Writing definition %C:%N", TREE_CODE (decl), decl);
    }
  else
    dump (dumper::DEPEND)
      && dump ("Depending definition %C:%N", TREE_CODE (decl), decl);

 again:
  switch (TREE_CODE (decl))
    {
    default:
      gcc_unreachable ();

    case TEMPLATE_DECL:
      decl = DECL_TEMPLATE_RESULT (decl);
      goto again;

    case FUNCTION_DECL:
      write_function_def (decl);
      break;

    case TYPE_DECL:
      {
        tree type = TREE_TYPE (decl);
        gcc_assert (TYPE_MAIN_VARIANT (type) == type
                    && TYPE_NAME (type) == decl);
        if (TREE_CODE (type) == ENUMERAL_TYPE)
          write_enum_def (decl);
        else
          write_class_def (decl);
      }
      break;

    case VAR_DECL:
    case CONCEPT_DECL:
      write_var_def (decl);
      break;
    }
}
12709
/* Mark a declaration for by-value walking.  If DO_DEFN is true, mark
   its body too.  */

void
trees_out::mark_declaration (tree decl, bool do_defn)
{
  mark_by_value (decl);

  if (TREE_CODE (decl) == TEMPLATE_DECL)
    decl = DECL_TEMPLATE_RESULT (decl);

  if (!do_defn)
    return;

  switch (TREE_CODE (decl))
    {
    default:
      gcc_unreachable ();

    case FUNCTION_DECL:
      mark_function_def (decl);
      break;

    case TYPE_DECL:
      {
        tree type = TREE_TYPE (decl);
        gcc_assert (TYPE_MAIN_VARIANT (type) == type
                    && TYPE_NAME (type) == decl);
        if (TREE_CODE (type) == ENUMERAL_TYPE)
          mark_enum_def (decl);
        else
          mark_class_def (decl);
      }
      break;

    case VAR_DECL:
    case CONCEPT_DECL:
      mark_var_def (decl);
      break;
    }
}
12751
/* Read in the body of DECL.  See above circularity note.  */

bool
trees_in::read_definition (tree decl)
{
  dump () && dump ("Reading definition %C %N", TREE_CODE (decl), decl);

  tree maybe_template = decl;

 again:
  switch (TREE_CODE (decl))
    {
    default:
      break;

    case TEMPLATE_DECL:
      decl = DECL_TEMPLATE_RESULT (decl);
      goto again;

    case FUNCTION_DECL:
      return read_function_def (decl, maybe_template);

    case TYPE_DECL:
      {
        tree type = TREE_TYPE (decl);
        gcc_assert (TYPE_MAIN_VARIANT (type) == type
                    && TYPE_NAME (type) == decl);
        if (TREE_CODE (type) == ENUMERAL_TYPE)
          return read_enum_def (decl, maybe_template);
        else
          return read_class_def (decl, maybe_template);
      }
      break;

    case VAR_DECL:
    case CONCEPT_DECL:
      return read_var_def (decl, maybe_template);
    }

  return false;
}
12793
/* Look up, and maybe insert, a depset slot for KEY.  */

depset **
depset::hash::entity_slot (tree entity, bool insert)
{
  traits::compare_type key (entity, NULL);
  depset **slot = find_slot_with_hash (key, traits::hash (key),
                                       insert ? INSERT : NO_INSERT);

  return slot;
}

depset **
depset::hash::binding_slot (tree ctx, tree name, bool insert)
{
  traits::compare_type key (ctx, name);
  depset **slot = find_slot_with_hash (key, traits::hash (key),
                                       insert ? INSERT : NO_INSERT);

  return slot;
}

depset *
depset::hash::find_dependency (tree decl)
{
  depset **slot = entity_slot (decl, false);

  return slot ? *slot : NULL;
}

depset *
depset::hash::find_binding (tree ctx, tree name)
{
  depset **slot = binding_slot (ctx, name, false);

  return slot ? *slot : NULL;
}
12831
/* DECL is a newly discovered dependency.  Create the depset, if it
   doesn't already exist.  Add it to the worklist if so.

   DECL will be an OVL_USING_P OVERLOAD, if it's from a binding that's
   a using decl.

   We do not have to worry about adding the same dependency more than
   once.  First it's harmless, but secondly the TREE_VISITED marking
   prevents us wanting to do it anyway.  */

depset *
depset::hash::make_dependency (tree decl, entity_kind ek)
{
  /* Make sure we're being told consistent information.  */
  gcc_checking_assert ((ek == EK_NAMESPACE)
                       == (TREE_CODE (decl) == NAMESPACE_DECL
                           && !DECL_NAMESPACE_ALIAS (decl)));
  gcc_checking_assert (ek != EK_BINDING && ek != EK_REDIRECT);
  gcc_checking_assert (TREE_CODE (decl) != FIELD_DECL
                       && (TREE_CODE (decl) != USING_DECL
                           || TREE_CODE (DECL_CONTEXT (decl)) == FUNCTION_DECL));
  gcc_checking_assert (!is_key_order ());
  if (ek == EK_USING)
    gcc_checking_assert (TREE_CODE (decl) == OVERLOAD);

  if (TREE_CODE (decl) == TEMPLATE_DECL)
    /* The template should have copied these from its result decl.  */
    gcc_checking_assert (DECL_MODULE_EXPORT_P (decl)
                         == DECL_MODULE_EXPORT_P (DECL_TEMPLATE_RESULT (decl)));

  depset **slot = entity_slot (decl, true);
  depset *dep = *slot;
  bool for_binding = ek == EK_FOR_BINDING;

  if (!dep)
    {
      if ((DECL_IMPLICIT_TYPEDEF_P (decl)
           /* ... not an enum, for instance.  */
           && RECORD_OR_UNION_TYPE_P (TREE_TYPE (decl))
           && TYPE_LANG_SPECIFIC (TREE_TYPE (decl))
           && CLASSTYPE_USE_TEMPLATE (TREE_TYPE (decl)) == 2)
          || (VAR_P (decl)
              && DECL_LANG_SPECIFIC (decl)
              && DECL_USE_TEMPLATE (decl) == 2))
        {
          /* A partial or explicit specialization.  Partial
             specializations might not be in the hash table, because
             there can be multiple differently-constrained variants.

             template<typename T> class silly;
             template<typename T> requires true class silly {};

             We need to find them, insert their TEMPLATE_DECL in the
             dep_hash, and then convert the dep we just found into a
             redirect.  */

          tree ti = get_template_info (decl);
          tree tmpl = TI_TEMPLATE (ti);
          tree partial = NULL_TREE;
          for (tree spec = DECL_TEMPLATE_SPECIALIZATIONS (tmpl);
               spec; spec = TREE_CHAIN (spec))
            if (DECL_TEMPLATE_RESULT (TREE_VALUE (spec)) == decl)
              {
                partial = TREE_VALUE (spec);
                break;
              }

          if (partial)
            {
              /* Eagerly create an empty redirect.  The following
                 make_dependency call could cause hash reallocation,
                 and invalidate slot's value.  */
              depset *redirect = make_entity (decl, EK_REDIRECT);

              /* Redirects are never reached -- always snap to their
                 target.  */
              redirect->set_flag_bit<DB_UNREACHED_BIT> ();

              *slot = redirect;

              depset *tmpl_dep = make_dependency (partial, EK_PARTIAL);
              gcc_checking_assert (tmpl_dep->get_entity_kind () == EK_PARTIAL);

              redirect->deps.safe_push (tmpl_dep);

              return redirect;
            }
        }

      bool has_def = ek != EK_USING && has_definition (decl);
      if (ek > EK_BINDING)
        ek = EK_DECL;

      /* The only OVERLOADS we should see are USING decls from
         bindings.  */
      *slot = dep = make_entity (decl, ek, has_def);

      if (CHECKING_P && TREE_CODE (decl) == TEMPLATE_DECL)
        /* The template_result should otherwise not be in the
           table, or be an empty redirect (created above).  */
        if (auto *eslot = entity_slot (DECL_TEMPLATE_RESULT (decl), false))
          gcc_checking_assert ((*eslot)->get_entity_kind () == EK_REDIRECT
                               && !(*eslot)->deps.length ());

      if (ek != EK_USING)
        {
          tree not_tmpl = STRIP_TEMPLATE (decl);

          if (DECL_LANG_SPECIFIC (not_tmpl)
              && DECL_MODULE_IMPORT_P (not_tmpl))
            {
              /* Store the module number and index in cluster/section,
                 so we don't have to look them up again.  */
              unsigned index = import_entity_index (decl);
              module_state *from = import_entity_module (index);
              /* Remap will be zero for imports from partitions, which
                 we want to treat as-if declared in this TU.  */
              if (from->remap)
                {
                  dep->cluster = index - from->entity_lwm;
                  dep->section = from->remap;
                  dep->set_flag_bit<DB_IMPORTED_BIT> ();
                }
            }

          if (ek == EK_DECL
              && !dep->is_import ()
              && TREE_CODE (CP_DECL_CONTEXT (decl)) == NAMESPACE_DECL
              && !(TREE_CODE (decl) == TEMPLATE_DECL
                   && DECL_UNINSTANTIATED_TEMPLATE_FRIEND_P (decl)))
            {
              tree ctx = CP_DECL_CONTEXT (decl);

              if (!TREE_PUBLIC (ctx))
                /* Member of internal namespace.  */
                dep->set_flag_bit<DB_IS_INTERNAL_BIT> ();
              else if (VAR_OR_FUNCTION_DECL_P (not_tmpl)
                       && DECL_THIS_STATIC (not_tmpl))
                {
                  /* An internal decl.  This is ok in a GM entity.  */
                  if (!(header_module_p ()
                        || !DECL_LANG_SPECIFIC (not_tmpl)
                        || !DECL_MODULE_PURVIEW_P (not_tmpl)))
                    dep->set_flag_bit<DB_IS_INTERNAL_BIT> ();
                }
            }
        }

      if (!dep->is_import ())
        worklist.safe_push (dep);
    }

  dump (dumper::DEPEND)
    && dump ("%s on %s %C:%N found",
             ek == EK_REDIRECT ? "Redirect"
             : for_binding ? "Binding" : "Dependency",
             dep->entity_kind_name (), TREE_CODE (decl), decl);

  return dep;
}
12991
/* DEP is a newly discovered dependency.  Append it to current's
   depset.  */

void
depset::hash::add_dependency (depset *dep)
{
  gcc_checking_assert (current && !is_key_order ());
  current->deps.safe_push (dep);

  if (dep->is_internal () && !current->is_internal ())
    current->set_flag_bit<DB_REFS_INTERNAL_BIT> ();

  if (current->get_entity_kind () == EK_USING
      && DECL_IMPLICIT_TYPEDEF_P (dep->get_entity ())
      && TREE_CODE (TREE_TYPE (dep->get_entity ())) == ENUMERAL_TYPE)
    {
      /* CURRENT is an unwrapped using-decl and DECL is an enum's
         implicit typedef.  Is CURRENT a member of the enum?  */
      tree c_decl = OVL_FUNCTION (current->get_entity ());

      if (TREE_CODE (c_decl) == CONST_DECL
          && (current->deps[0]->get_entity ()
              == CP_DECL_CONTEXT (dep->get_entity ())))
        /* Make DECL depend on CURRENT.  */
        dep->deps.safe_push (current);
    }

  if (dep->is_unreached ())
    {
      /* The dependency is reachable now.  */
      reached_unreached = true;
      dep->clear_flag_bit<DB_UNREACHED_BIT> ();
      dump (dumper::DEPEND)
        && dump ("Reaching unreached %s %C:%N", dep->entity_kind_name (),
                 TREE_CODE (dep->get_entity ()), dep->get_entity ());
    }
}
13029
depset *
depset::hash::add_dependency (tree decl, entity_kind ek)
{
  depset *dep;

  if (is_key_order ())
    {
      dep = find_dependency (decl);
      if (dep)
        {
          current->deps.safe_push (dep);
          dump (dumper::MERGE)
            && dump ("Key dependency on %s %C:%N found",
                     dep->entity_kind_name (), TREE_CODE (decl), decl);
        }
      else
        {
          /* It's not a mergeable decl, look for it in the original
             table.  */
          dep = chain->find_dependency (decl);
          gcc_checking_assert (dep);
        }
    }
  else
    {
      dep = make_dependency (decl, ek);
      if (dep->get_entity_kind () != EK_REDIRECT)
        add_dependency (dep);
    }

  return dep;
}
13062
void
depset::hash::add_namespace_context (depset *dep, tree ns)
{
  depset *ns_dep = make_dependency (ns, depset::EK_NAMESPACE);
  dep->deps.safe_push (ns_dep);

  /* Mark it as special if imported so we don't walk connect when
     SCCing.  */
  if (!dep->is_binding () && ns_dep->is_import ())
    dep->set_special ();
}

struct add_binding_data
{
  tree ns;
  bitmap partitions;
  depset *binding;
  depset::hash *hash;
  bool met_namespace;
};
13083
/* Return true if we are, or contain, something that is exported.  */

bool
depset::hash::add_binding_entity (tree decl, WMB_Flags flags, void *data_)
{
  auto data = static_cast <add_binding_data *> (data_);

  if (!(TREE_CODE (decl) == NAMESPACE_DECL && !DECL_NAMESPACE_ALIAS (decl)))
    {
      tree inner = decl;

      if (TREE_CODE (inner) == CONST_DECL
          && TREE_CODE (DECL_CONTEXT (inner)) == ENUMERAL_TYPE)
        inner = TYPE_NAME (DECL_CONTEXT (inner));
      else if (TREE_CODE (inner) == TEMPLATE_DECL)
        inner = DECL_TEMPLATE_RESULT (inner);

      if ((!DECL_LANG_SPECIFIC (inner) || !DECL_MODULE_PURVIEW_P (inner))
          && !((flags & WMB_Using) && (flags & WMB_Export)))
        /* Ignore global module fragment entities unless explicitly
           exported with a using declaration.  */
        return false;

      if (VAR_OR_FUNCTION_DECL_P (inner)
          && DECL_THIS_STATIC (inner))
        {
          if (!header_module_p ())
            /* Ignore internal-linkage entities.  */
            return false;
        }

      if ((TREE_CODE (decl) == VAR_DECL
           || TREE_CODE (decl) == TYPE_DECL)
          && DECL_TINFO_P (decl))
        /* Ignore TINFO things.  */
        return false;

      if (TREE_CODE (decl) == VAR_DECL && DECL_NTTP_OBJECT_P (decl))
        /* Ignore NTTP objects.  */
        return false;

      if (!(flags & WMB_Using) && CP_DECL_CONTEXT (decl) != data->ns)
        {
          /* A using that lost its wrapper or an unscoped enum
             constant.  */
          flags = WMB_Flags (flags | WMB_Using);
          if (DECL_MODULE_EXPORT_P (TREE_CODE (decl) == CONST_DECL
                                    ? TYPE_NAME (TREE_TYPE (decl))
                                    : STRIP_TEMPLATE (decl)))
            flags = WMB_Flags (flags | WMB_Export);
        }

      if (!data->binding)
        /* No binding to check.  */;
      else if (flags & WMB_Using)
        {
          /* Look in the binding to see if we already have this
             using.  */
          for (unsigned ix = data->binding->deps.length (); --ix;)
            {
              depset *d = data->binding->deps[ix];
              if (d->get_entity_kind () == EK_USING
                  && OVL_FUNCTION (d->get_entity ()) == decl)
                {
                  if (!(flags & WMB_Hidden))
                    d->clear_hidden_binding ();
                  if (flags & WMB_Export)
                    OVL_EXPORT_P (d->get_entity ()) = true;
                  return bool (flags & WMB_Export);
                }
            }
        }
      else if (flags & WMB_Dups)
        {
          /* Look in the binding to see if we already have this decl.  */
          for (unsigned ix = data->binding->deps.length (); --ix;)
            {
              depset *d = data->binding->deps[ix];
              if (d->get_entity () == decl)
                {
                  if (!(flags & WMB_Hidden))
                    d->clear_hidden_binding ();
                  return false;
                }
            }
        }

      /* We're adding something.  */
      if (!data->binding)
        {
          data->binding = make_binding (data->ns, DECL_NAME (decl));
          data->hash->add_namespace_context (data->binding, data->ns);

          depset **slot = data->hash->binding_slot (data->ns,
                                                    DECL_NAME (decl), true);
          gcc_checking_assert (!*slot);
          *slot = data->binding;
        }

      /* Make sure nobody left a tree visited lying about.  */
      gcc_checking_assert (!TREE_VISITED (decl));

      if (flags & WMB_Using)
        {
          decl = ovl_make (decl, NULL_TREE);
          if (flags & WMB_Export)
            OVL_EXPORT_P (decl) = true;
        }

      depset *dep = data->hash->make_dependency
        (decl, flags & WMB_Using ? EK_USING : EK_FOR_BINDING);
      if (flags & WMB_Hidden)
        dep->set_hidden_binding ();
      data->binding->deps.safe_push (dep);
      /* Binding and contents are mutually dependent.  */
      dep->deps.safe_push (data->binding);

      return (flags & WMB_Using
              ? flags & WMB_Export : DECL_MODULE_EXPORT_P (decl));
    }
  else if (DECL_NAME (decl) && !data->met_namespace)
    {
      /* Namespace, walk exactly once.  */
      gcc_checking_assert (TREE_PUBLIC (decl));
      data->met_namespace = true;
      if (data->hash->add_namespace_entities (decl, data->partitions))
        {
          /* It contains an exported thing, so it is exported.  */
          gcc_checking_assert (DECL_MODULE_PURVIEW_P (decl));
          DECL_MODULE_EXPORT_P (decl) = true;
        }

      if (DECL_MODULE_PURVIEW_P (decl))
        {
          data->hash->make_dependency (decl, depset::EK_NAMESPACE);

          return DECL_MODULE_EXPORT_P (decl);
        }
    }

  return false;
}
13226
/* Recursively find all the namespace bindings of NS.  Add a depset
   for every binding that contains an export or module-linkage entity.
   Add a defining depset for every such decl that we need to write a
   definition.  Such defining depsets depend on the binding depset.
   Returns true if we contain something exported.  */

bool
depset::hash::add_namespace_entities (tree ns, bitmap partitions)
{
  dump () && dump ("Looking for writables in %N", ns);
  dump.indent ();

  unsigned count = 0;
  add_binding_data data;
  data.ns = ns;
  data.partitions = partitions;
  data.hash = this;

  hash_table<named_decl_hash>::iterator end
    (DECL_NAMESPACE_BINDINGS (ns)->end ());
  for (hash_table<named_decl_hash>::iterator iter
         (DECL_NAMESPACE_BINDINGS (ns)->begin ()); iter != end; ++iter)
    {
      data.binding = nullptr;
      data.met_namespace = false;
      if (walk_module_binding (*iter, partitions, add_binding_entity, &data))
        count++;
    }

  if (count)
    dump () && dump ("Found %u entries", count);
  dump.outdent ();

  return count != 0;
}
13262
void
depset::hash::add_partial_entities (vec<tree, va_gc> *partial_classes)
{
  for (unsigned ix = 0; ix != partial_classes->length (); ix++)
    {
      tree inner = (*partial_classes)[ix];

      depset *dep = make_dependency (inner, depset::EK_DECL);

      if (dep->get_entity_kind () == depset::EK_REDIRECT)
        /* We should have recorded the template as a partial
           specialization.  */
        gcc_checking_assert (dep->deps[0]->get_entity_kind ()
                             == depset::EK_PARTIAL);
      else
        /* It was an explicit specialization, not a partial one.  */
        gcc_checking_assert (dep->get_entity_kind ()
                             == depset::EK_SPECIALIZATION);
    }
}

/* Add the members of imported classes that we defined in this TU.
   This will also include lazily created implicit member function
   declarations.  (All others will be definitions.)  */

void
depset::hash::add_class_entities (vec<tree, va_gc> *class_members)
{
  for (unsigned ix = 0; ix != class_members->length (); ix++)
    {
      tree defn = (*class_members)[ix];
      depset *dep = make_dependency (defn, EK_INNER_DECL);

      if (dep->get_entity_kind () == EK_REDIRECT)
        dep = dep->deps[0];

      /* Only non-instantiations need marking as members.  */
      if (dep->get_entity_kind () == EK_DECL)
        dep->set_flag_bit <DB_IS_MEMBER_BIT> ();
    }
}
13304
/* We add the partial & explicit specializations, and the explicit
   instantiations.  */

static void
specialization_add (bool decl_p, spec_entry *entry, void *data_)
{
  vec<spec_entry *> *data = reinterpret_cast <vec<spec_entry *> *> (data_);

  if (!decl_p)
    {
      /* We exclusively use decls to locate things.  Make sure there's
         no mismatch between the two specialization tables we keep.
         pt.cc optimizes instantiation lookup using a complicated
         heuristic.  We don't attempt to replicate that algorithm, but
         observe its behaviour and reproduce it upon read back.  */

      gcc_checking_assert (TREE_CODE (entry->spec) == ENUMERAL_TYPE
                           || DECL_CLASS_TEMPLATE_P (entry->tmpl));

      gcc_checking_assert (!match_mergeable_specialization (true, entry));
    }
  else if (VAR_OR_FUNCTION_DECL_P (entry->spec))
    gcc_checking_assert (!DECL_LOCAL_DECL_P (entry->spec));

  data->safe_push (entry);
}

/* Arbitrary stable comparison.  */

static int
specialization_cmp (const void *a_, const void *b_)
{
  const spec_entry *ea = *reinterpret_cast<const spec_entry *const *> (a_);
  const spec_entry *eb = *reinterpret_cast<const spec_entry *const *> (b_);

  if (ea == eb)
    return 0;

  tree a = ea->spec;
  tree b = eb->spec;
  if (TYPE_P (a))
    {
      a = TYPE_NAME (a);
      b = TYPE_NAME (b);
    }

  if (a == b)
    /* This can happen with friend specializations.  Just order by
       entry address.  See note in depset_cmp.  */
    return ea < eb ? -1 : +1;

  return DECL_UID (a) < DECL_UID (b) ? -1 : +1;
}
13358
/* We add all kinds of specializations.  Implicit specializations
   should only be streamed and walked if they are reachable from
   elsewhere.  Hence the UNREACHED flag.  This is making the
   assumption that it is cheaper to reinstantiate them on demand
   elsewhere, rather than stream them in when we instantiate their
   general template.  Also, if we do stream them, we can only do that
   if they are not internal (which they can become if they themselves
   touch an internal entity?).  */

void
depset::hash::add_specializations (bool decl_p)
{
  vec<spec_entry *> data;
  data.create (100);
  walk_specializations (decl_p, specialization_add, &data);
  data.qsort (specialization_cmp);
  while (data.length ())
    {
      spec_entry *entry = data.pop ();
      tree spec = entry->spec;
      int use_tpl = 0;
      bool is_friend = false;

      if (decl_p && DECL_UNINSTANTIATED_TEMPLATE_FRIEND_P (entry->tmpl))
        /* A friend of a template.  This is keyed to the
           instantiation.  */
        is_friend = true;

      if (decl_p)
        {
          if (tree ti = DECL_TEMPLATE_INFO (spec))
            {
              tree tmpl = TI_TEMPLATE (ti);

              use_tpl = DECL_USE_TEMPLATE (spec);
              if (spec == DECL_TEMPLATE_RESULT (tmpl))
                {
                  spec = tmpl;
                  gcc_checking_assert (DECL_USE_TEMPLATE (spec) == use_tpl);
                }
              else if (is_friend)
                {
                  if (TI_TEMPLATE (ti) != entry->tmpl
                      || !template_args_equal (TI_ARGS (ti), entry->tmpl))
                    goto template_friend;
                }
            }
          else
            {
            template_friend:;
              gcc_checking_assert (is_friend);
              /* This is a friend of a template class, but not the one
                 that generated entry->spec itself (i.e. it's an
                 equivalent clone).  We do not need to record
                 this.  */
              continue;
            }
        }
      else
        {
          if (TREE_CODE (spec) == ENUMERAL_TYPE)
            {
              tree ctx = DECL_CONTEXT (TYPE_NAME (spec));

              if (TYPE_P (ctx))
                use_tpl = CLASSTYPE_USE_TEMPLATE (ctx);
              else
                use_tpl = DECL_USE_TEMPLATE (ctx);
            }
          else
            use_tpl = CLASSTYPE_USE_TEMPLATE (spec);

          tree ti = TYPE_TEMPLATE_INFO (spec);
          tree tmpl = TI_TEMPLATE (ti);

          spec = TYPE_NAME (spec);
          if (spec == DECL_TEMPLATE_RESULT (tmpl))
            {
              spec = tmpl;
              use_tpl = DECL_USE_TEMPLATE (spec);
            }
        }

      bool needs_reaching = false;
      if (use_tpl == 1)
        /* Implicit instantiations are only walked if we reach them.  */
        needs_reaching = true;
      else if (!DECL_LANG_SPECIFIC (STRIP_TEMPLATE (spec))
               || !DECL_MODULE_PURVIEW_P (STRIP_TEMPLATE (spec)))
        /* Likewise, GMF explicit or partial specializations.  */
        needs_reaching = true;

#if false && CHECKING_P
      /* The instantiation isn't always on
         DECL_TEMPLATE_INSTANTIATIONS, */
      // FIXME: we probably need to remember this information?
      /* Verify the specialization is on the
         DECL_TEMPLATE_INSTANTIATIONS of the template.  */
      for (tree cons = DECL_TEMPLATE_INSTANTIATIONS (entry->tmpl);
           cons; cons = TREE_CHAIN (cons))
        if (TREE_VALUE (cons) == entry->spec)
          {
            gcc_assert (entry->args == TREE_PURPOSE (cons));
            goto have_spec;
          }
      gcc_unreachable ();
    have_spec:;
#endif

      /* Make sure nobody left a tree visited lying about.  */
      gcc_checking_assert (!TREE_VISITED (spec));
      depset *dep = make_dependency (spec, depset::EK_SPECIALIZATION);
      if (dep->is_special ())
        gcc_unreachable ();
      else
        {
          if (dep->get_entity_kind () == depset::EK_REDIRECT)
            dep = dep->deps[0];
          else if (dep->get_entity_kind () == depset::EK_SPECIALIZATION)
            {
              dep->set_special ();
              dep->deps.safe_push (reinterpret_cast<depset *> (entry));
              if (!decl_p)
                dep->set_flag_bit<DB_TYPE_SPEC_BIT> ();
            }

          if (needs_reaching)
            dep->set_flag_bit<DB_UNREACHED_BIT> ();
          if (is_friend)
            dep->set_flag_bit<DB_FRIEND_SPEC_BIT> ();
        }
    }
  data.release ();
}
13493
/* Add a depset into the mergeable hash.  */

void
depset::hash::add_mergeable (depset *mergeable)
{
  gcc_checking_assert (is_key_order ());
  entity_kind ek = mergeable->get_entity_kind ();
  tree decl = mergeable->get_entity ();
  gcc_checking_assert (ek < EK_DIRECT_HWM);

  depset **slot = entity_slot (decl, true);
  gcc_checking_assert (!*slot);
  depset *dep = make_entity (decl, ek);
  *slot = dep;

  worklist.safe_push (dep);

  /* So we can locate the mergeable depset this depset refers to,
     mark the first dep.  */
  dep->set_special ();
  dep->deps.safe_push (mergeable);
}

/* Find the innermost-namespace scope of DECL, and that
   namespace-scope decl.  */

tree
find_pending_key (tree decl, tree *decl_p = nullptr)
{
  tree ns = decl;
  do
    {
      decl = ns;
      ns = CP_DECL_CONTEXT (ns);
      if (TYPE_P (ns))
        ns = TYPE_NAME (ns);
    }
  while (TREE_CODE (ns) != NAMESPACE_DECL);

  if (decl_p)
    *decl_p = decl;

  return ns;
}
13538
/* Iteratively find dependencies.  During the walk we may find more
   entries on the same binding that need walking.  */

void
depset::hash::find_dependencies (module_state *module)
{
  trees_out walker (NULL, module, *this);
  vec<depset *> unreached;
  unreached.create (worklist.length ());

  for (;;)
    {
      reached_unreached = false;
      while (worklist.length ())
        {
          depset *item = worklist.pop ();

          gcc_checking_assert (!item->is_binding ());
          if (item->is_unreached ())
            unreached.quick_push (item);
          else
            {
              current = item;
              tree decl = current->get_entity ();
              dump (is_key_order () ? dumper::MERGE : dumper::DEPEND)
                && dump ("Dependencies of %s %C:%N",
                         is_key_order () ? "key-order"
                         : current->entity_kind_name (), TREE_CODE (decl),
                         decl);
              dump.indent ();
              walker.begin ();
              if (current->get_entity_kind () == EK_USING)
                walker.tree_node (OVL_FUNCTION (decl));
              else if (TREE_VISITED (decl))
                /* A global tree.  */;
              else if (item->get_entity_kind () == EK_NAMESPACE)
                {
                  module->note_location (DECL_SOURCE_LOCATION (decl));
                  add_namespace_context (current, CP_DECL_CONTEXT (decl));
                }
              else
                {
                  walker.mark_declaration (decl, current->has_defn ());

                  if (!walker.is_key_order ()
                      && (item->get_entity_kind () == EK_SPECIALIZATION
                          || item->get_entity_kind () == EK_PARTIAL
                          || (item->get_entity_kind () == EK_DECL
                              && item->is_member ())))
                    {
                      tree ns = find_pending_key (decl, nullptr);
                      add_namespace_context (item, ns);
                    }

                  walker.decl_value (decl, current);
                  if (current->has_defn ())
                    walker.write_definition (decl);
                }
              walker.end ();

              if (!walker.is_key_order ()
                  && TREE_CODE (decl) == TEMPLATE_DECL
                  && !DECL_UNINSTANTIATED_TEMPLATE_FRIEND_P (decl))
                /* Mark all the explicit & partial specializations as
                   reachable.  */
                for (tree cons = DECL_TEMPLATE_INSTANTIATIONS (decl);
                     cons; cons = TREE_CHAIN (cons))
                  {
                    tree spec = TREE_VALUE (cons);
                    if (TYPE_P (spec))
                      spec = TYPE_NAME (spec);
                    int use_tpl;
                    node_template_info (spec, use_tpl);
                    if (use_tpl & 2)
                      {
                        depset *spec_dep = find_dependency (spec);
                        if (spec_dep->get_entity_kind () == EK_REDIRECT)
                          spec_dep = spec_dep->deps[0];
                        if (spec_dep->is_unreached ())
                          {
                            reached_unreached = true;
                            spec_dep->clear_flag_bit<DB_UNREACHED_BIT> ();
                            dump (dumper::DEPEND)
                              && dump ("Reaching unreached specialization"
                                       " %C:%N", TREE_CODE (spec), spec);
                          }
                      }
                  }

              dump.outdent ();
              current = NULL;
            }
        }

      if (!reached_unreached)
        break;

      /* It's possible that we reached the unreached before we
         processed it in the above loop, so we'll be doing this an
         extra time.  However, to avoid that we have to do some
         bit shuffling that also involves a scan of the list.
         Swings & roundabouts I guess.  */
      std::swap (worklist, unreached);
    }

  unreached.release ();
}
13645
/* Compare two entries of a single binding.  TYPE_DECL before
   non-exported before exported.  */

static int
binding_cmp (const void *a_, const void *b_)
{
  depset *a = *(depset *const *)a_;
  depset *b = *(depset *const *)b_;

  tree a_ent = a->get_entity ();
  tree b_ent = b->get_entity ();
  gcc_checking_assert (a_ent != b_ent
		       && !a->is_binding ()
		       && !b->is_binding ());

  /* Implicit typedefs come first.  */
  bool a_implicit = DECL_IMPLICIT_TYPEDEF_P (a_ent);
  bool b_implicit = DECL_IMPLICIT_TYPEDEF_P (b_ent);
  if (a_implicit || b_implicit)
    {
      /* A binding with two implicit type decls?  That's unpossible!  */
      gcc_checking_assert (!(a_implicit && b_implicit));
      return a_implicit ? -1 : +1;  /* Implicit first.  */
    }

  /* Hidden before non-hidden.  */
  bool a_hidden = a->is_hidden ();
  bool b_hidden = b->is_hidden ();
  if (a_hidden != b_hidden)
    return a_hidden ? -1 : +1;

  bool a_using = a->get_entity_kind () == depset::EK_USING;
  bool a_export;
  if (a_using)
    {
      a_export = OVL_EXPORT_P (a_ent);
      a_ent = OVL_FUNCTION (a_ent);
    }
  else
    a_export = DECL_MODULE_EXPORT_P (TREE_CODE (a_ent) == CONST_DECL
				     ? TYPE_NAME (TREE_TYPE (a_ent))
				     : STRIP_TEMPLATE (a_ent));

  bool b_using = b->get_entity_kind () == depset::EK_USING;
  bool b_export;
  if (b_using)
    {
      b_export = OVL_EXPORT_P (b_ent);
      b_ent = OVL_FUNCTION (b_ent);
    }
  else
    b_export = DECL_MODULE_EXPORT_P (TREE_CODE (b_ent) == CONST_DECL
				     ? TYPE_NAME (TREE_TYPE (b_ent))
				     : STRIP_TEMPLATE (b_ent));

  /* Non-exports before exports.  */
  if (a_export != b_export)
    return a_export ? +1 : -1;

  /* At this point we don't care, but want a stable sort.  */

  if (a_using != b_using)
    /* Using first.  */
    return a_using ? -1 : +1;

  return DECL_UID (a_ent) < DECL_UID (b_ent) ? -1 : +1;
}
13713
/* Sort the bindings, issue errors about bad internal refs.  */

bool
depset::hash::finalize_dependencies ()
{
  bool ok = true;
  depset::hash::iterator end (this->end ());
  for (depset::hash::iterator iter (begin ()); iter != end; ++iter)
    {
      depset *dep = *iter;
      if (dep->is_binding ())
	{
	  /* Keep the containing namespace dep first.  */
	  gcc_checking_assert (dep->deps.length () > 1
			       && (dep->deps[0]->get_entity_kind ()
				   == EK_NAMESPACE)
			       && (dep->deps[0]->get_entity ()
				   == dep->get_entity ()));
	  if (dep->deps.length () > 2)
	    gcc_qsort (&dep->deps[1], dep->deps.length () - 1,
		       sizeof (dep->deps[1]), binding_cmp);
	}
      else if (dep->refs_internal ())
	{
	  for (unsigned ix = dep->deps.length (); ix--;)
	    {
	      depset *rdep = dep->deps[ix];
	      if (rdep->is_internal ())
		{
		  // FIXME:QOI Better location information?  We're
		  // losing, so it doesn't matter about efficiency.
		  tree decl = dep->get_entity ();
		  error_at (DECL_SOURCE_LOCATION (decl),
			    "%q#D references internal linkage entity %q#D",
			    decl, rdep->get_entity ());
		  break;
		}
	    }
	  ok = false;
	}
    }

  return ok;
}
13758
/* Core of TARJAN's algorithm to find Strongly Connected Components
   within a graph.  See https://en.wikipedia.org/wiki/
   Tarjan%27s_strongly_connected_components_algorithm for details.

   We use depset::section as lowlink.  Completed nodes have
   depset::cluster containing the cluster number, with the top
   bit set.

   A useful property is that the output vector is a reverse
   topological sort of the resulting DAG.  In our case that means
   dependent SCCs are found before their dependers.  We make use of
   that property.  */

void
depset::tarjan::connect (depset *v)
{
  gcc_checking_assert (v->is_binding ()
		       || !(v->is_unreached () || v->is_import ()));

  v->cluster = v->section = ++index;
  stack.safe_push (v);

  /* Walk all our dependencies, ignoring the first slot if it is
     marked special.  */
  for (unsigned ix = v->is_special (); ix != v->deps.length (); ix++)
    {
      depset *dep = v->deps[ix];

      if (dep->is_binding () || !dep->is_import ())
	{
	  unsigned lwm = dep->cluster;

	  if (!dep->cluster)
	    {
	      /* A new node.  Connect it.  */
	      connect (dep);
	      lwm = dep->section;
	    }

	  if (dep->section && v->section > lwm)
	    v->section = lwm;
	}
    }

  if (v->section == v->cluster)
    {
      /* Root of a new SCC.  Push all the members onto the result list.  */
      unsigned num = v->cluster;
      depset *p;
      do
	{
	  p = stack.pop ();
	  p->cluster = num;
	  p->section = 0;
	  result.quick_push (p);
	}
      while (p != v);
    }
}
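The reverse-topological output property described in the comment above can be seen in a minimal standalone sketch of the same algorithm over plain integer adjacency lists (the `Tarjan` struct here is a hypothetical illustration, not the depset-based implementation):

```cpp
#include <vector>
#include <algorithm>
#include <cassert>

// Minimal Tarjan SCC sketch: nodes 0..n-1 with adjacency lists.
// SCCs are emitted such that an SCC appears before any SCC that
// depends on it (reverse topological order), the property the
// module machinery relies on.
struct Tarjan
{
  const std::vector<std::vector<int>> &adj;
  std::vector<int> index, low;		// index 0 == unvisited
  std::vector<int> stack;
  std::vector<bool> on_stack;
  std::vector<std::vector<int>> sccs;	// reverse topological order
  int counter = 0;

  explicit Tarjan (const std::vector<std::vector<int>> &a)
    : adj (a), index (a.size (), 0), low (a.size (), 0),
      on_stack (a.size (), false) {}

  void connect (int v)
  {
    index[v] = low[v] = ++counter;
    stack.push_back (v);
    on_stack[v] = true;
    for (int w : adj[v])
      {
	if (!index[w])
	  {
	    connect (w);		// A new node, recurse.
	    low[v] = std::min (low[v], low[w]);
	  }
	else if (on_stack[w])
	  low[v] = std::min (low[v], index[w]);
      }
    if (low[v] == index[v])
      {
	// V roots a new SCC: pop its members off the stack.
	std::vector<int> scc;
	int w;
	do
	  {
	    w = stack.back (); stack.pop_back ();
	    on_stack[w] = false;
	    scc.push_back (w);
	  }
	while (w != v);
	sccs.push_back (scc);
      }
  }

  void run ()
  {
    for (int v = 0; v < (int) adj.size (); v++)
      if (!index[v])
	connect (v);
  }
};
```

With edges 0→1, 1→0 and 0→2, the dependent SCC {2} is emitted before the SCC {0,1} that reaches it.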
13817
/* Compare two depsets.  The specific ordering is unimportant, we're
   just trying to get consistency.  */

static int
depset_cmp (const void *a_, const void *b_)
{
  depset *a = *(depset *const *)a_;
  depset *b = *(depset *const *)b_;

  depset::entity_kind a_kind = a->get_entity_kind ();
  depset::entity_kind b_kind = b->get_entity_kind ();

  if (a_kind != b_kind)
    /* Different entity kinds, order by that.  */
    return a_kind < b_kind ? -1 : +1;

  tree a_decl = a->get_entity ();
  tree b_decl = b->get_entity ();
  if (a_kind == depset::EK_USING)
    {
      /* If one is a using, the other must be too.  */
      a_decl = OVL_FUNCTION (a_decl);
      b_decl = OVL_FUNCTION (b_decl);
    }

  if (a_decl != b_decl)
    /* Different entities, order by their UID.  */
    return DECL_UID (a_decl) < DECL_UID (b_decl) ? -1 : +1;

  if (a_kind == depset::EK_BINDING)
    {
      /* Both are bindings.  Order by identifier hash.  */
      gcc_checking_assert (a->get_name () != b->get_name ());
      hashval_t ah = IDENTIFIER_HASH_VALUE (a->get_name ());
      hashval_t bh = IDENTIFIER_HASH_VALUE (b->get_name ());
      return (ah == bh ? 0 : ah < bh ? -1 : +1);
    }

  /* They are the same decl.  This can happen with two using decls
     pointing to the same target.  The best we can aim for is
     consistently telling qsort how to order them.  Hopefully we'll
     never have to debug a case that depends on this.  Oh, who am I
     kidding?  Good luck.  */
  gcc_checking_assert (a_kind == depset::EK_USING);

  /* Order by depset address.  Not the best, but it is something.  */
  return a < b ? -1 : +1;
}
13866
/* Sort the clusters in SCC such that those that depend on one another
   are placed later.  */

// FIXME: I am not convinced this is needed and, if needed,
// sufficient.  We emit the decls in this order but that emission
// could walk into later decls (from the body of the decl, or default
// arg-like things).  Why doesn't that walk do the right thing?  And
// if it DTRT why do we need to sort here -- won't things naturally
// work?  I think part of the issue is that when we're going to refer
// to an entity by name, and that entity is in the same cluster as us,
// we need to actually walk that entity, if we've not already walked
// it.
static void
sort_cluster (depset::hash *original, depset *scc[], unsigned size)
{
  depset::hash table (size, original);

  dump.indent ();

  /* Place bindings last, usings before that.  It's not strictly
     necessary, but it does make things neater.  Says Mr OCD.  */
  unsigned bind_lwm = size;
  unsigned use_lwm = size;
  for (unsigned ix = 0; ix != use_lwm;)
    {
      depset *dep = scc[ix];
      switch (dep->get_entity_kind ())
	{
	case depset::EK_BINDING:
	  /* Move to end.  No increment.  Notice this could be moving
	     a using decl, which we'll then move again.  */
	  if (--bind_lwm != ix)
	    {
	      scc[ix] = scc[bind_lwm];
	      scc[bind_lwm] = dep;
	    }
	  if (use_lwm > bind_lwm)
	    {
	      use_lwm--;
	      break;
	    }
	  /* We must have copied a using, so move it too.  */
	  dep = scc[ix];
	  gcc_checking_assert (dep->get_entity_kind () == depset::EK_USING);
	  /* FALLTHROUGH */

	case depset::EK_USING:
	  if (--use_lwm != ix)
	    {
	      scc[ix] = scc[use_lwm];
	      scc[use_lwm] = dep;
	    }
	  break;

	case depset::EK_DECL:
	case depset::EK_SPECIALIZATION:
	case depset::EK_PARTIAL:
	  table.add_mergeable (dep);
	  ix++;
	  break;

	default:
	  gcc_unreachable ();
	}
    }

  gcc_checking_assert (use_lwm <= bind_lwm);
  dump (dumper::MERGE) && dump ("Ordering %u/%u depsets", use_lwm, size);

  table.find_dependencies (nullptr);

  vec<depset *> order = table.connect ();
  gcc_checking_assert (order.length () == use_lwm);

  /* Now rewrite entries [0,lwm), in the dependency order we
     discovered.  Usually each entity is in its own cluster.  Rarely,
     we can get multi-entity clusters, in which case all but one must
     only be reached from within the cluster.  This happens for
     something like:

     template<typename T>
     auto Foo (const T &arg) -> TPL<decltype (arg)>;

     The instantiation of TPL will be in the specialization table, and
     refer to Foo via arg.  But we can only get to that specialization
     from Foo's declaration, so we only need to treat Foo as mergeable
     (we'll do structural comparison of TPL<decltype (arg)>).

     Finding the single cluster entry dep is very tricky and
     expensive.  Let's just not do that.  It's harmless in this case
     anyway.  */
  unsigned pos = 0;
  unsigned cluster = ~0u;
  for (unsigned ix = 0; ix != order.length (); ix++)
    {
      gcc_checking_assert (order[ix]->is_special ());
      depset *dep = order[ix]->deps[0];
      scc[pos++] = dep;
      dump (dumper::MERGE)
	&& dump ("Mergeable %u is %N%s", ix, dep->get_entity (),
		 order[ix]->cluster == cluster ? " (tight)" : "");
      cluster = order[ix]->cluster;
    }

  gcc_checking_assert (pos == use_lwm);

  order.release ();
  dump (dumper::MERGE) && dump ("Ordered %u keys", pos);
  dump.outdent ();
}
13977
/* Reduce graph to SCCS clusters.  SCCS will be populated with the
   depsets in dependency order.  Each depset's CLUSTER field contains
   its cluster number.  Each SCC has a unique cluster number, and they
   are contiguous in SCCS.  Cluster numbers are otherwise arbitrary.  */

vec<depset *>
depset::hash::connect ()
{
  tarjan connector (size ());
  vec<depset *> deps;
  deps.create (size ());
  iterator end (this->end ());
  for (iterator iter (begin ()); iter != end; ++iter)
    {
      depset *item = *iter;

      entity_kind kind = item->get_entity_kind ();
      if (kind == EK_BINDING
	  || !(kind == EK_REDIRECT
	       || item->is_unreached ()
	       || item->is_import ()))
	deps.quick_push (item);
    }

  /* Iteration over the hash table happens in an unspecified order.
     While that has advantages, it causes two problems.  Firstly,
     repeatable builds are tricky.  Secondly, it is hard to create
     testcases that check dependencies are correct -- ones that would
     produce a bad ordering if a dependency were missing.  So sort
     into a consistent order.  */
  deps.qsort (depset_cmp);

  while (deps.length ())
    {
      depset *v = deps.pop ();
      dump (dumper::CLUSTER)
	&& (v->is_binding ()
	    ? dump ("Connecting binding %P", v->get_entity (), v->get_name ())
	    : dump ("Connecting %s %s %C:%N",
		    is_key_order () ? "key-order"
		    : !v->has_defn () ? "declaration" : "definition",
		    v->entity_kind_name (), TREE_CODE (v->get_entity ()),
		    v->get_entity ()));
      if (!v->cluster)
	connector.connect (v);
    }

  deps.release ();
  return connector.result;
}
14027
/* Initialize location spans.  */

void
loc_spans::init (const line_maps *lmaps, const line_map_ordinary *map)
{
  gcc_checking_assert (!init_p ());
  spans = new vec<span> ();
  spans->reserve (20);

  span interval;
  interval.ordinary.first = 0;
  interval.macro.second = MAX_LOCATION_T + 1;
  interval.ordinary_delta = interval.macro_delta = 0;

  /* A span for reserved fixed locs.  */
  interval.ordinary.second
    = MAP_START_LOCATION (LINEMAPS_ORDINARY_MAP_AT (line_table, 0));
  interval.macro.first = interval.macro.second;
  dump (dumper::LOCATION)
    && dump ("Fixed span %u ordinary:[%u,%u) macro:[%u,%u)", spans->length (),
	     interval.ordinary.first, interval.ordinary.second,
	     interval.macro.first, interval.macro.second);
  spans->quick_push (interval);

  /* A span for command line & forced headers.  */
  interval.ordinary.first = interval.ordinary.second;
  interval.macro.second = interval.macro.first;
  if (map)
    {
      interval.ordinary.second = map->start_location;
      interval.macro.first = LINEMAPS_MACRO_LOWEST_LOCATION (lmaps);
    }
  dump (dumper::LOCATION)
    && dump ("Pre span %u ordinary:[%u,%u) macro:[%u,%u)", spans->length (),
	     interval.ordinary.first, interval.ordinary.second,
	     interval.macro.first, interval.macro.second);
  spans->quick_push (interval);

  /* Start an interval for the main file.  */
  interval.ordinary.first = interval.ordinary.second;
  interval.macro.second = interval.macro.first;
  dump (dumper::LOCATION)
    && dump ("Main span %u ordinary:[%u,*) macro:[*,%u)", spans->length (),
	     interval.ordinary.first, interval.macro.second);
  spans->quick_push (interval);
}
14074
/* Reopen the span, if we want the about-to-be-inserted set of maps to
   be propagated in our own location table.  I.e. we are the primary
   interface and we're importing a partition.  */

bool
loc_spans::maybe_propagate (module_state *import, location_t hwm)
{
  bool opened = (module_interface_p () && !module_partition_p ()
		 && import->is_partition ());
  if (opened)
    open (hwm);
  return opened;
}

/* Open a new linemap interval.  The just-created ordinary map is the
   first map of the interval.  */

void
loc_spans::open (location_t hwm)
{
  span interval;
  interval.ordinary.first = interval.ordinary.second = hwm;
  interval.macro.first = interval.macro.second
    = LINEMAPS_MACRO_LOWEST_LOCATION (line_table);
  interval.ordinary_delta = interval.macro_delta = 0;
  dump (dumper::LOCATION)
    && dump ("Opening span %u ordinary:[%u,... macro:...,%u)",
	     spans->length (), interval.ordinary.first,
	     interval.macro.second);
  if (spans->length ())
    {
      /* No overlapping!  */
      auto &last = spans->last ();
      gcc_checking_assert (interval.ordinary.first >= last.ordinary.second);
      gcc_checking_assert (interval.macro.second <= last.macro.first);
    }
  spans->safe_push (interval);
}

/* Close out the current linemap interval.  The last maps are within
   the interval.  */

void
loc_spans::close ()
{
  span &interval = spans->last ();

  interval.ordinary.second
    = ((line_table->highest_location + (1 << line_table->default_range_bits))
       & ~((1u << line_table->default_range_bits) - 1));
  interval.macro.first = LINEMAPS_MACRO_LOWEST_LOCATION (line_table);
  dump (dumper::LOCATION)
    && dump ("Closing span %u ordinary:[%u,%u) macro:[%u,%u)",
	     spans->length () - 1,
	     interval.ordinary.first, interval.ordinary.second,
	     interval.macro.first, interval.macro.second);
}
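The `ordinary.second` computation above rounds the highest location up past its current value to the next multiple of `1 << default_range_bits` (the low bits of a location encode column ranges).  A minimal sketch of just that rounding (hypothetical `round_up_loc` helper, not a GCC API):

```cpp
#include <cassert>

// Advance HWM past its current value to the next multiple of
// 1 << range_bits, mirroring the interval.ordinary.second
// computation in loc_spans::close above.
static unsigned
round_up_loc (unsigned hwm, unsigned range_bits)
{
  return (hwm + (1u << range_bits)) & ~((1u << range_bits) - 1);
}
```

Note the result is always strictly greater than HWM, so an exact multiple still advances to the next granule.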
14132
/* Given an ordinary location LOC, return the lmap_interval it resides
   in.  NULL if it is not in an interval.  */

const loc_spans::span *
loc_spans::ordinary (location_t loc)
{
  unsigned len = spans->length ();
  unsigned pos = 0;
  while (len)
    {
      unsigned half = len / 2;
      const span &probe = (*spans)[pos + half];
      if (loc < probe.ordinary.first)
	len = half;
      else if (loc < probe.ordinary.second)
	return &probe;
      else
	{
	  pos += half + 1;
	  len = len - (half + 1);
	}
    }
  return NULL;
}

/* Likewise, given a macro location LOC, return the lmap interval it
   resides in.  */

const loc_spans::span *
loc_spans::macro (location_t loc)
{
  unsigned len = spans->length ();
  unsigned pos = 0;
  while (len)
    {
      unsigned half = len / 2;
      const span &probe = (*spans)[pos + half];
      if (loc >= probe.macro.second)
	len = half;
      else if (loc >= probe.macro.first)
	return &probe;
      else
	{
	  pos += half + 1;
	  len = len - (half + 1);
	}
    }
  return NULL;
}
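Both lookups above are the same binary search over sorted, disjoint, half-open intervals (the macro variant merely runs downwards).  A standalone sketch of the pattern over plain pairs (hypothetical `find_span` helper):

```cpp
#include <utility>
#include <vector>
#include <cassert>

// Binary search over sorted, disjoint half-open intervals
// [first, second).  Returns the index of the containing interval,
// or -1 if LOC falls in a gap or outside all intervals.
static int
find_span (const std::vector<std::pair<unsigned, unsigned>> &spans,
	   unsigned loc)
{
  unsigned pos = 0, len = spans.size ();
  while (len)
    {
      unsigned half = len / 2;
      const auto &probe = spans[pos + half];
      if (loc < probe.first)
	len = half;			// Below the probe: go left.
      else if (loc < probe.second)
	return int (pos + half);	// first <= loc < second: found.
      else
	{
	  pos += half + 1;		// Above the probe: go right.
	  len -= half + 1;
	}
    }
  return -1;
}
```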
14182
/* Return the ordinary location closest to FROM.  */

static location_t
ordinary_loc_of (line_maps *lmaps, location_t from)
{
  while (!IS_ORDINARY_LOC (from))
    {
      if (IS_ADHOC_LOC (from))
	from = get_location_from_adhoc_loc (lmaps, from);
      if (from >= LINEMAPS_MACRO_LOWEST_LOCATION (lmaps))
	{
	  /* Find the ordinary location nearest FROM.  */
	  const line_map *map = linemap_lookup (lmaps, from);
	  const line_map_macro *mac_map = linemap_check_macro (map);
	  from = mac_map->get_expansion_point_location ();
	}
    }
  return from;
}

static module_state **
get_module_slot (tree name, module_state *parent, bool partition, bool insert)
{
  module_state_hash::compare_type ct (name, uintptr_t (parent) | partition);
  hashval_t hv = module_state_hash::hash (ct);

  return modules_hash->find_slot_with_hash (ct, hv, insert ? INSERT : NO_INSERT);
}

static module_state *
get_primary (module_state *parent)
{
  while (parent->is_partition ())
    parent = parent->parent;

  if (!parent->name)
    // Implementation unit has null name.
    parent = parent->parent;

  return parent;
}
14224
/* Find or create module NAME & PARENT in the hash table.  */

module_state *
get_module (tree name, module_state *parent, bool partition)
{
  /* We might be given an empty NAME if preprocessing fails to handle
     a header-name token.  */
  if (name && TREE_CODE (name) == STRING_CST
      && TREE_STRING_LENGTH (name) == 0)
    return nullptr;

  if (partition)
    {
      if (!parent)
	parent = get_primary ((*modules)[0]);

      if (!parent->is_partition () && !parent->flatname)
	parent->set_flatname ();
    }

  module_state **slot = get_module_slot (name, parent, partition, true);
  module_state *state = *slot;
  if (!state)
    {
      state = (new (ggc_alloc<module_state> ())
	       module_state (name, parent, partition));
      *slot = state;
    }
  return state;
}
14255
/* Process string name PTR into a module_state.  */

static module_state *
get_module (const char *ptr)
{
  /* On DOS-based file systems there is an ambiguity with A:B, which
     can be interpreted as either the module Module:Partition or the
     path Drive:PATH.  Interpret strings that clearly start as
     pathnames as header-names; everything else is treated as a
     (possibly malformed) named module.  */
  if (IS_DIR_SEPARATOR (ptr[ptr[0] == '.']) // ./FOO or /FOO
#if HAVE_DOS_BASED_FILE_SYSTEM
      || (HAS_DRIVE_SPEC (ptr) && IS_DIR_SEPARATOR (ptr[2])) // A:/FOO
#endif
      || false)
    /* A header name.  */
    return get_module (build_string (strlen (ptr), ptr));

  bool partition = false;
  module_state *mod = NULL;

  for (const char *probe = ptr;; probe++)
    if (!*probe || *probe == '.' || *probe == ':')
      {
	if (probe == ptr)
	  return NULL;

	mod = get_module (get_identifier_with_length (ptr, probe - ptr),
			  mod, partition);
	ptr = probe;
	if (*ptr == ':')
	  {
	    if (partition)
	      return NULL;
	    partition = true;
	  }

	if (!*ptr++)
	  break;
      }
    else if (!(ISALPHA (*probe) || *probe == '_'
	       || (probe != ptr && ISDIGIT (*probe))))
      return NULL;

  return mod;
}
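The parsing loop above accepts dot-separated identifier components with at most one ':' partition separator.  A standalone validity check implementing the same grammar (hypothetical `valid_module_name` helper, using the standard `<cctype>` classifiers in place of GCC's ISALPHA/ISDIGIT):

```cpp
#include <cctype>
#include <cassert>

// Simplified validity check for the module-name grammar accepted by
// get_module above: non-empty identifier components separated by '.',
// with at most one ':' introducing a partition.  Components must not
// start with a digit.
static bool
valid_module_name (const char *ptr)
{
  bool partition = false;
  for (const char *probe = ptr;; probe++)
    if (!*probe || *probe == '.' || *probe == ':')
      {
	if (probe == ptr)
	  return false;			// Empty component.
	ptr = probe;
	if (*ptr == ':')
	  {
	    if (partition)
	      return false;		// At most one ':' allowed.
	    partition = true;
	  }
	if (!*ptr++)
	  break;
      }
    else if (!(isalpha ((unsigned char) *probe) || *probe == '_'
	       || (probe != ptr && isdigit ((unsigned char) *probe))))
      return false;			// Bad character in a component.
  return true;
}
```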
14301
/* Create a new mapper connecting to OPTION.  */

module_client *
make_mapper (location_t loc, class mkdeps *deps)
{
  timevar_start (TV_MODULE_MAPPER);
  const char *option = module_mapper_name;
  if (!option)
    option = getenv ("CXX_MODULE_MAPPER");

  mapper = module_client::open_module_client
    (loc, option, deps, &set_cmi_repo,
     (save_decoded_options[0].opt_index == OPT_SPECIAL_program_name)
     && save_decoded_options[0].arg != progname
     ? save_decoded_options[0].arg : nullptr);

  timevar_stop (TV_MODULE_MAPPER);

  return mapper;
}
14322
static unsigned lazy_snum;

static bool
recursive_lazy (unsigned snum = ~0u)
{
  if (lazy_snum)
    {
      error_at (input_location, "recursive lazy load");
      return true;
    }

  lazy_snum = snum;
  return false;
}
14337
/* If THIS is the current purview, issue an import error and return
   false.  */

bool
module_state::check_not_purview (location_t from)
{
  module_state *imp = (*modules)[0];
  if (imp && !imp->name)
    imp = imp->parent;
  if (imp == this)
    {
      /* Cannot import the current module.  */
      error_at (from, "cannot import module in its own purview");
      inform (loc, "module %qs declared here", get_flatname ());
      return false;
    }
  return true;
}
14355
/* Module name substitutions.  */
static vec<module_state *, va_heap> substs;

void
module_state::mangle (bool include_partition)
{
  if (subst)
    mangle_module_substitution (subst);
  else
    {
      if (parent)
	parent->mangle (include_partition);
      if (include_partition || !is_partition ())
	{
	  // Partitions are significant for global initializer
	  // functions.
	  bool partition = is_partition () && !parent->is_partition ();
	  subst = mangle_module_component (name, partition);
	  substs.safe_push (this);
	}
    }
}

void
mangle_module (int mod, bool include_partition)
{
  module_state *imp = (*modules)[mod];

  gcc_checking_assert (!imp->is_header ());

  if (!imp->name)
    /* Set when importing the primary module interface.  */
    imp = imp->parent;

  imp->mangle (include_partition);
}

/* Clean up substitutions.  */
void
mangle_module_fini ()
{
  while (substs.length ())
    substs.pop ()->subst = 0;
}
14400
/* Announce WHAT about the module.  */

void
module_state::announce (const char *what) const
{
  if (noisy_p ())
    {
      fprintf (stderr, " %s:%s", what, get_flatname ());
      fflush (stderr);
    }
}
14412
/* A human-readable README section.  The contents of this section do
   not contribute to the CRC, so the contents can change per
   compilation.  That allows us to embed CWD, hostname, build time and
   what not.  It is a STRTAB that may be extracted with:
     readelf -pgnu.c++.README $(module).gcm  */

void
module_state::write_readme (elf_out *to, cpp_reader *reader, const char *dialect)
{
  bytes_out readme (to);

  readme.begin (false);

  readme.printf ("GNU C++ %s",
		 is_header () ? "header unit"
		 : !is_partition () ? "primary interface"
		 : is_interface () ? "interface partition"
		 : "internal partition");

  /* Compiler's version.  */
  readme.printf ("compiler: %s", version_string);

  /* Module format version.  */
  verstr_t string;
  version2string (MODULE_VERSION, string);
  readme.printf ("version: %s", string);

  /* Module information.  */
  readme.printf ("module: %s", get_flatname ());
  readme.printf ("source: %s", main_input_filename);
  readme.printf ("dialect: %s", dialect);
  if (extensions)
    readme.printf ("extensions: %s",
		   extensions & SE_OPENMP ? "-fopenmp" : "");

  /* The following fields could be expected to change between
     otherwise identical compilations.  Consider a distributed build
     system.  We should have a way of overriding that.  */
  if (char *cwd = getcwd (NULL, 0))
    {
      readme.printf ("cwd: %s", cwd);
      free (cwd);
    }
  readme.printf ("repository: %s", cmi_repo ? cmi_repo : ".");
#if NETWORKING
  {
    char hostname[64];
    if (!gethostname (hostname, sizeof (hostname)))
      readme.printf ("host: %s", hostname);
  }
#endif
  {
    /* This of course will change!  */
    time_t stampy;
    auto kind = cpp_get_date (reader, &stampy);
    if (kind != CPP_time_kind::UNKNOWN)
      {
	struct tm *time;

	time = gmtime (&stampy);
	readme.print_time ("build", time, "UTC");

	if (kind == CPP_time_kind::DYNAMIC)
	  {
	    time = localtime (&stampy);
	    readme.print_time ("local", time,
#if defined (__USE_MISC) || defined (__USE_BSD) /* Is there a better way?  */
			       time->tm_zone
#else
			       ""
#endif
			       );
	  }
      }
  }

  /* Its direct imports.  */
  for (unsigned ix = 1; ix < modules->length (); ix++)
    {
      module_state *state = (*modules)[ix];

      if (state->is_direct ())
	readme.printf ("%s: %s %s", state->exported_p ? "export" : "import",
		       state->get_flatname (), state->filename);
    }

  readme.end (to, to->name (MOD_SNAME_PFX ".README"), NULL);
}
14501
/* Sort environment var names in reverse order.  */

static int
env_var_cmp (const void *a_, const void *b_)
{
  const unsigned char *a = *(const unsigned char *const *)a_;
  const unsigned char *b = *(const unsigned char *const *)b_;

  for (unsigned ix = 0; ; ix++)
    {
      bool a_end = !a[ix] || a[ix] == '=';
      if (a[ix] == b[ix])
	{
	  if (a_end)
	    break;
	}
      else
	{
	  bool b_end = !b[ix] || b[ix] == '=';

	  if (!a_end && !b_end)
	    return a[ix] < b[ix] ? +1 : -1;
	  if (a_end && b_end)
	    break;
	  return a_end ? +1 : -1;
	}
    }

  return 0;
}
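The comparator's effect — ordering entries by the NAME part before '=' only, in reverse lexicographic order, with a longer name sorting before its prefix — can be sketched as a standalone predicate suitable for std::sort (hypothetical `env_before` helper, a re-implementation for illustration rather than the qsort callback above):

```cpp
#include <cstring>
#include <algorithm>
#include <vector>
#include <string>
#include <cassert>

// True if env entry A sorts before B: compare only the NAME part
// (up to '=' or NUL), descending; when one name is a prefix of the
// other, the longer name comes first.
static bool
env_before (const char *a, const char *b)
{
  size_t alen = strcspn (a, "=");
  size_t blen = strcspn (b, "=");
  int cmp = strncmp (a, b, std::min (alen, blen));
  if (cmp)
    return cmp > 0;		// Reverse lexicographic order.
  return alen > blen;		// Longer name first when prefix-equal.
}
```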
14532
/* Write the environment.  It is a STRTAB that may be extracted with:
     readelf -pgnu.c++.ENV $(module).gcm  */

void
module_state::write_env (elf_out *to)
{
  vec<const char *> vars;
  vars.create (20);

  extern char **environ;
  while (const char *var = environ[vars.length ()])
    vars.safe_push (var);
  vars.qsort (env_var_cmp);

  bytes_out env (to);
  env.begin (false);
  while (vars.length ())
    env.printf ("%s", vars.pop ());
  env.end (to, to->name (MOD_SNAME_PFX ".ENV"), NULL);

  vars.release ();
}
14555
/* Write the direct or indirect imports.
   u:N
   {
     u:index
     s:name
     u32:crc
     s:filename (direct)
     u:exported (direct)
   } imports[N]  */

void
module_state::write_imports (bytes_out &sec, bool direct)
{
  unsigned count = 0;

  for (unsigned ix = 1; ix < modules->length (); ix++)
    {
      module_state *imp = (*modules)[ix];

      if (imp->remap && imp->is_direct () == direct)
	count++;
    }

  gcc_assert (!direct || count);

  sec.u (count);
  for (unsigned ix = 1; ix < modules->length (); ix++)
    {
      module_state *imp = (*modules)[ix];

      if (imp->remap && imp->is_direct () == direct)
	{
	  dump () && dump ("Writing %simport:%u->%u %M (crc=%x)",
			   !direct ? "indirect "
			   : imp->exported_p ? "exported " : "",
			   ix, imp->remap, imp, imp->crc);
	  sec.u (imp->remap);
	  sec.str (imp->get_flatname ());
	  sec.u32 (imp->crc);
	  if (direct)
	    {
	      write_location (sec, imp->imported_from ());
	      sec.str (imp->filename);
	      int exportedness = 0;
	      if (imp->exported_p)
		exportedness = +1;
	      else if (!imp->is_purview_direct ())
		exportedness = -1;
	      sec.i (exportedness);
	    }
	}
    }
}
14610
/* Read imports.  READER and LMAPS != NULL for direct imports,
   NULL for indirect imports.  */

unsigned
module_state::read_imports (bytes_in &sec, cpp_reader *reader, line_maps *lmaps)
{
  unsigned count = sec.u ();
  unsigned loaded = 0;

  while (count--)
    {
      unsigned ix = sec.u ();
      if (ix >= slurp->remap->length () || !ix || (*slurp->remap)[ix])
	{
	  sec.set_overrun ();
	  break;
	}

      const char *name = sec.str (NULL);
      module_state *imp = get_module (name);
      unsigned crc = sec.u32 ();
      int exportedness = 0;

      /* If the import is a partition, it must be from the same
	 primary module as this TU.  */
      if (imp && imp->is_partition ()
	  && (!named_module_p ()
	      || (get_primary ((*modules)[0]) != get_primary (imp))))
	imp = NULL;

      if (!imp)
	sec.set_overrun ();
      if (sec.get_overrun ())
	break;

      if (lmaps)
	{
	  /* A direct import, maybe load it.  */
	  location_t floc = read_location (sec);
	  const char *fname = sec.str (NULL);
	  exportedness = sec.i ();

	  if (sec.get_overrun ())
	    break;

	  if (!imp->check_not_purview (loc))
	    continue;

	  if (imp->loadedness == ML_NONE)
	    {
	      imp->loc = floc;
	      imp->crc = crc;
	      if (!imp->get_flatname ())
		imp->set_flatname ();

	      unsigned n = dump.push (imp);

	      if (!imp->filename && fname)
		imp->filename = xstrdup (fname);

	      if (imp->is_partition ())
		dump () && dump ("Importing elided partition %M", imp);

	      if (!imp->do_import (reader, false))
		imp = NULL;
	      dump.pop (n);
	      if (!imp)
		continue;
	    }

	  if (is_partition ())
	    {
	      if (!imp->is_direct ())
		imp->directness = MD_PARTITION_DIRECT;
	      if (exportedness > 0)
		imp->exported_p = true;
	    }
	}
      else
	{
	  /* An indirect import, find it, it should already be here.  */
	  if (imp->loadedness == ML_NONE)
	    {
	      error_at (loc, "indirect import %qs is not already loaded", name);
	      continue;
	    }
	}

      if (imp->crc != crc)
	error_at (loc, "import %qs has CRC mismatch", imp->get_flatname ());

      (*slurp->remap)[ix] = (imp->mod << 1) | (lmaps != NULL);

      if (lmaps && exportedness >= 0)
	set_import (imp, bool (exportedness));
      dump () && dump ("Found %simport:%u %M->%u", !lmaps ? "indirect "
		       : exportedness > 0 ? "exported "
		       : exportedness < 0 ? "gmf" : "", ix, imp,
		       imp->mod);
      loaded++;
    }

  return loaded;
}

/* Write the import table to MOD_SNAME_PFX.imp.  */

void
module_state::write_imports (elf_out *to, unsigned *crc_ptr)
{
  dump () && dump ("Writing imports");
  dump.indent ();

  bytes_out sec (to);
  sec.begin ();

  write_imports (sec, true);
  write_imports (sec, false);

  sec.end (to, to->name (MOD_SNAME_PFX ".imp"), crc_ptr);
  dump.outdent ();
}

bool
module_state::read_imports (cpp_reader *reader, line_maps *lmaps)
{
  bytes_in sec;

  if (!sec.begin (loc, from (), MOD_SNAME_PFX ".imp"))
    return false;

  dump () && dump ("Reading %u imports", slurp->remap->length () - 1);
  dump.indent ();

  /* Read the imports.  */
  unsigned direct = read_imports (sec, reader, lmaps);
  unsigned indirect = read_imports (sec, NULL, NULL);
  if (direct + indirect + 1 != slurp->remap->length ())
    from ()->set_error (elf::E_BAD_IMPORT);

  dump.outdent ();
  if (!sec.end (from ()))
    return false;
  return true;
}
14756
14757/* We're the primary module interface, but have partitions. Document
14758 them so that non-partition module implementation units know which
14759 have already been loaded. */
14760
14761void
14762module_state::write_partitions (elf_out *to, unsigned count, unsigned *crc_ptr)
14763{
14764 dump () && dump ("Writing %u elided partitions", count);
14765 dump.indent ();
14766
14767 bytes_out sec (to);
14768 sec.begin ();
14769
14770 for (unsigned ix = 1; ix != modules->length (); ix++)
14771 {
14772 module_state *imp = (*modules)[ix];
14773 if (imp->is_partition ())
14774 {
14775 dump () && dump ("Writing elided partition %M (crc=%x)",
14776 imp, imp->crc);
14777 sec.str (ptr: imp->get_flatname ());
14778 sec.u32 (val: imp->crc);
14779 write_location (sec, imp->is_direct ()
14780 ? imp->imported_from () : UNKNOWN_LOCATION);
14781 sec.str (ptr: imp->filename);
14782 }
14783 }
14784
14785 sec.end (sink: to, name: to->name (MOD_SNAME_PFX ".prt"), crc_ptr);
14786 dump.outdent ();
14787}
14788
14789bool
14790module_state::read_partitions (unsigned count)
14791{
14792 bytes_in sec;
14793 if (!sec.begin (loc, source: from (), MOD_SNAME_PFX ".prt"))
14794 return false;
14795
14796 dump () && dump ("Reading %u elided partitions", count);
14797 dump.indent ();
14798
14799 while (count--)
14800 {
14801 const char *name = sec.str (NULL);
14802 unsigned crc = sec.u32 ();
14803 location_t floc = read_location (sec);
14804 const char *fname = sec.str (NULL);
14805
14806 if (sec.get_overrun ())
14807 break;
14808
14809 dump () && dump ("Reading elided partition %s (crc=%x)", name, crc);
14810
14811 module_state *imp = get_module (ptr: name);
14812 if (!imp /* Partition should be ... */
14813 || !imp->is_partition () /* a partition ... */
14814 || imp->loadedness != ML_NONE /* that is not yet loaded ... */
14815 || get_primary (parent: imp) != this) /* whose primary is this. */
14816 {
14817 sec.set_overrun ();
14818 break;
14819 }
14820
14821 if (!imp->has_location ())
14822 imp->loc = floc;
14823 imp->crc = crc;
14824 if (!imp->filename && fname[0])
14825 imp->filename = xstrdup (fname);
14826 }
14827
14828 dump.outdent ();
14829 if (!sec.end (src: from ()))
14830 return false;
14831 return true;
14832}

/* Data for config reading and writing.  */
struct module_state_config {
  const char *dialect_str;
  unsigned num_imports;
  unsigned num_partitions;
  unsigned num_entities;
  unsigned ordinary_locs;
  unsigned macro_locs;
  unsigned loc_range_bits;
  unsigned active_init;

public:
  module_state_config ()
    : dialect_str (get_dialect ()),
      num_imports (0), num_partitions (0), num_entities (0),
      ordinary_locs (0), macro_locs (0), loc_range_bits (0),
      active_init (0)
  {
  }

  static void release ()
  {
    XDELETEVEC (dialect);
    dialect = NULL;
  }

private:
  static const char *get_dialect ();
  static char *dialect;
};

char *module_state_config::dialect;

/* Generate a string of the significant compilation options.
   Generally assume the user knows what they're doing, in the same way
   that object files can be mixed.  */

const char *
module_state_config::get_dialect ()
{
  if (!dialect)
    dialect = concat (get_cxx_dialect_name (cxx_dialect),
		      /* C++ implies these; only show if disabled.  */
		      flag_exceptions ? "" : "/no-exceptions",
		      flag_rtti ? "" : "/no-rtti",
		      flag_new_inheriting_ctors ? "" : "/old-inheriting-ctors",
		      /* C++20 implies concepts.  */
		      cxx_dialect < cxx20 && flag_concepts ? "/concepts" : "",
		      flag_coroutines ? "/coroutines" : "",
		      flag_module_implicit_inline ? "/implicit-inline" : "",
		      flag_contracts ? "/contracts" : "",
		      NULL);

  return dialect;
}
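To illustrate how get_dialect composes the option string, here is a minimal standalone sketch: positive-polarity options append a "/flag" suffix, while options the dialect already implies appear only when disabled.  The function and flag names below are illustrative, not GCC's real globals.

```cpp
#include <cassert>
#include <string>

// Hypothetical mirror of the dialect-string composition: a base
// dialect name, then suffixes.  Implied features only show up when
// turned off; optional features only when turned on.
static std::string
make_dialect (const char *base, bool exceptions, bool rtti, bool coroutines)
{
  std::string d = base;
  if (!exceptions)
    d += "/no-exceptions";	// Implied by C++; note only when disabled.
  if (!rtti)
    d += "/no-rtti";
  if (coroutines)
    d += "/coroutines";		// Optional; note only when enabled.
  return d;
}
```

Two CMIs built with different dialect strings are detected as incompatible at import time, which is why only the significant options participate.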

/* Contents of a cluster.  */
enum cluster_tag {
  ct_decl,	/* A decl.  */
  ct_defn,	/* A definition.  */
  ct_bind,	/* A binding.  */
  ct_hwm
};

/* Binding modifiers.  */
enum ct_bind_flags
{
  cbf_export = 0x1,	/* An exported decl.  */
  cbf_hidden = 0x2,	/* A hidden (friend) decl.  */
  cbf_using = 0x4,	/* A using decl.  */
  cbf_wrapped = 0x8,	/* ... that is wrapped.  */
};
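Each binding entry streams a small integer combining these bits, and the reader rejects impossible combinations: a hidden (friend) decl can be neither a using decl nor exported.  A standalone sketch of that validity check (illustrative names, not GCC's actual reader):

```cpp
#include <cassert>

// Mirror of the ct_bind_flags encoding and the consistency rule the
// cluster reader enforces on each streamed binding entry.
enum bind_flags
{
  bf_export = 0x1,	// An exported decl.
  bf_hidden = 0x2,	// A hidden (friend) decl.
  bf_using = 0x4,	// A using decl.
  bf_wrapped = 0x8,	// ... that is wrapped.
};

// A hidden decl cannot also be a using decl or an export; the real
// reader marks the section overrun when it sees such a combination.
static bool
valid_bind_flags (int flags)
{
  if (flags & bf_hidden)
    return !(flags & (bf_using | bf_export));
  return true;
}
```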

/* DEP belongs to a different cluster; seed it to prevent an
   unfortunately timed duplicate import.  */
// FIXME: QOI.  For inter-cluster references we could pick just one
// entity from an earlier cluster.  Even better, track dependencies
// between earlier clusters.

void
module_state::intercluster_seed (trees_out &sec, unsigned index_hwm, depset *dep)
{
  if (dep->is_import ()
      || dep->cluster < index_hwm)
    {
      tree ent = dep->get_entity ();
      if (!TREE_VISITED (ent))
	{
	  sec.tree_node (ent);
	  dump (dumper::CLUSTER)
	    && dump ("Seeded %s %N",
		     dep->is_import () ? "import" : "intercluster", ent);
	}
    }
}

/* Write the cluster of depsets in SCC[0-SIZE).
   dep->section -> section number
   dep->cluster -> entity number  */

unsigned
module_state::write_cluster (elf_out *to, depset *scc[], unsigned size,
			     depset::hash &table, unsigned *counts,
			     unsigned *crc_ptr)
{
  dump () && dump ("Writing section:%u %u depsets", table.section, size);
  dump.indent ();

  trees_out sec (to, this, table, table.section);
  sec.begin ();
  unsigned index_lwm = counts[MSC_entities];

  /* Determine entity numbers; mark for writing.  */
  dump (dumper::CLUSTER) && dump ("Cluster members:") && (dump.indent (), true);
  for (unsigned ix = 0; ix != size; ix++)
    {
      depset *b = scc[ix];

      switch (b->get_entity_kind ())
	{
	default:
	  gcc_unreachable ();

	case depset::EK_BINDING:
	  {
	    dump (dumper::CLUSTER)
	      && dump ("[%u]=%s %P", ix, b->entity_kind_name (),
		       b->get_entity (), b->get_name ());
	    depset *ns_dep = b->deps[0];
	    gcc_checking_assert (ns_dep->get_entity_kind ()
				 == depset::EK_NAMESPACE
				 && ns_dep->get_entity () == b->get_entity ());
	    for (unsigned jx = b->deps.length (); --jx;)
	      {
		depset *dep = b->deps[jx];
		// We could be declaring something that is also a
		// (merged) import.
		gcc_checking_assert (dep->is_import ()
				     || TREE_VISITED (dep->get_entity ())
				     || (dep->get_entity_kind ()
					 == depset::EK_USING));
	      }
	  }
	  break;

	case depset::EK_DECL:
	case depset::EK_SPECIALIZATION:
	case depset::EK_PARTIAL:
	  b->cluster = counts[MSC_entities]++;
	  sec.mark_declaration (b->get_entity (), b->has_defn ());
	  /* FALLTHROUGH  */

	case depset::EK_USING:
	  gcc_checking_assert (!b->is_import ()
			       && !b->is_unreached ());
	  dump (dumper::CLUSTER)
	    && dump ("[%u]=%s %s %N", ix, b->entity_kind_name (),
		     b->has_defn () ? "definition" : "declaration",
		     b->get_entity ());
	  break;
	}
    }
  dump (dumper::CLUSTER) && (dump.outdent (), true);

  /* Ensure every out-of-cluster decl is referenced before we start
     streaming.  We must do both imports *and* earlier clusters,
     because the latter could reach into the former and cause a
     duplicate loop.  */
  sec.set_importing (+1);
  for (unsigned ix = 0; ix != size; ix++)
    {
      depset *b = scc[ix];
      for (unsigned jx = b->is_special (); jx != b->deps.length (); jx++)
	{
	  depset *dep = b->deps[jx];

	  if (dep->is_binding ())
	    {
	      for (unsigned ix = dep->deps.length (); --ix;)
		{
		  depset *bind = dep->deps[ix];
		  if (bind->get_entity_kind () == depset::EK_USING)
		    bind = bind->deps[1];

		  intercluster_seed (sec, index_lwm, bind);
		}
	      /* Also check the namespace itself.  */
	      dep = dep->deps[0];
	    }

	  intercluster_seed (sec, index_lwm, dep);
	}
    }
  sec.tree_node (NULL_TREE);
  /* We're done importing now.  */
  sec.set_importing (-1);

  /* Write non-definitions.  */
  for (unsigned ix = 0; ix != size; ix++)
    {
      depset *b = scc[ix];
      tree decl = b->get_entity ();
      switch (b->get_entity_kind ())
	{
	default:
	  gcc_unreachable ();
	  break;

	case depset::EK_BINDING:
	  {
	    gcc_assert (TREE_CODE (decl) == NAMESPACE_DECL);
	    dump () && dump ("Depset:%u binding %C:%P", ix, TREE_CODE (decl),
			     decl, b->get_name ());
	    sec.u (ct_bind);
	    sec.tree_node (decl);
	    sec.tree_node (b->get_name ());

	    /* Write in reverse order, so reading will see the exports
	       first; thus building the overload chain will be
	       optimized.  */
	    for (unsigned jx = b->deps.length (); --jx;)
	      {
		depset *dep = b->deps[jx];
		tree bound = dep->get_entity ();
		unsigned flags = 0;
		if (dep->get_entity_kind () == depset::EK_USING)
		  {
		    tree ovl = bound;
		    bound = OVL_FUNCTION (bound);
		    if (!(TREE_CODE (bound) == CONST_DECL
			  && UNSCOPED_ENUM_P (TREE_TYPE (bound))
			  && decl == TYPE_NAME (TREE_TYPE (bound))))
		      {
			/* An unscoped enumerator in its enumeration's
			   scope is not a using.  */
			flags |= cbf_using;
			if (OVL_USING_P (ovl))
			  flags |= cbf_wrapped;
		      }
		    if (OVL_EXPORT_P (ovl))
		      flags |= cbf_export;
		  }
		else
		  {
		    /* An implicit typedef must be at index one.  */
		    gcc_assert (!DECL_IMPLICIT_TYPEDEF_P (bound) || jx == 1);
		    if (dep->is_hidden ())
		      flags |= cbf_hidden;
		    else if (DECL_MODULE_EXPORT_P (STRIP_TEMPLATE (bound)))
		      flags |= cbf_export;
		  }

		gcc_checking_assert (DECL_P (bound));

		sec.i (flags);
		sec.tree_node (bound);
	      }

	    /* Terminate the list.  */
	    sec.i (-1);
	  }
	  break;

	case depset::EK_USING:
	  dump () && dump ("Depset:%u %s %C:%N", ix, b->entity_kind_name (),
			   TREE_CODE (decl), decl);
	  break;

	case depset::EK_SPECIALIZATION:
	case depset::EK_PARTIAL:
	case depset::EK_DECL:
	  dump () && dump ("Depset:%u %s entity:%u %C:%N", ix,
			   b->entity_kind_name (), b->cluster,
			   TREE_CODE (decl), decl);

	  sec.u (ct_decl);
	  sec.tree_node (decl);

	  dump () && dump ("Wrote declaration entity:%u %C:%N",
			   b->cluster, TREE_CODE (decl), decl);
	  break;
	}
    }

  depset *namer = NULL;

  /* Write out definitions.  */
  for (unsigned ix = 0; ix != size; ix++)
    {
      depset *b = scc[ix];
      tree decl = b->get_entity ();
      switch (b->get_entity_kind ())
	{
	default:
	  break;

	case depset::EK_SPECIALIZATION:
	case depset::EK_PARTIAL:
	case depset::EK_DECL:
	  if (!namer)
	    namer = b;

	  if (b->has_defn ())
	    {
	      sec.u (ct_defn);
	      sec.tree_node (decl);
	      dump () && dump ("Writing definition %N", decl);
	      sec.write_definition (decl);

	      if (!namer->has_defn ())
		namer = b;
	    }
	  break;
	}
    }

  /* We don't find the section by name.  Use the depset's decl's name
     for human friendliness.  */
  unsigned name = 0;
  tree naming_decl = NULL_TREE;
  if (namer)
    {
      naming_decl = namer->get_entity ();
      if (namer->get_entity_kind () == depset::EK_USING)
	/* This unfortunately names the section from the target of the
	   using decl.  But the name is only a guide, so Do Not Care.  */
	naming_decl = OVL_FUNCTION (naming_decl);
      if (DECL_IMPLICIT_TYPEDEF_P (naming_decl))
	/* Lose any anonymousness.  */
	naming_decl = TYPE_NAME (TREE_TYPE (naming_decl));
      name = to->qualified_name (naming_decl, namer->has_defn ());
    }

  unsigned bytes = sec.pos;
  unsigned snum = sec.end (to, name, crc_ptr);

  for (unsigned ix = size; ix--;)
    gcc_checking_assert (scc[ix]->section == snum);

  dump.outdent ();
  dump () && dump ("Wrote section:%u named-by:%N", table.section, naming_decl);

  return bytes;
}

/* Read a cluster from section SNUM.  */

bool
module_state::read_cluster (unsigned snum)
{
  trees_in sec (this);

  if (!sec.begin (loc, from (), snum))
    return false;

  dump () && dump ("Reading section:%u", snum);
  dump.indent ();

  /* We care about structural equality.  */
  comparing_dependent_aliases++;

  /* First seed the imports.  */
  while (tree import = sec.tree_node ())
    dump (dumper::CLUSTER) && dump ("Seeded import %N", import);

  while (!sec.get_overrun () && sec.more_p ())
    {
      unsigned ct = sec.u ();
      switch (ct)
	{
	default:
	  sec.set_overrun ();
	  break;

	case ct_bind:
	  /* A set of namespace bindings.  */
	  {
	    tree ns = sec.tree_node ();
	    tree name = sec.tree_node ();
	    tree decls = NULL_TREE;
	    tree visible = NULL_TREE;
	    tree type = NULL_TREE;
	    bool dedup = false;

	    /* We rely on the bindings being in the reverse order of
	       the resulting overload set.  */
	    for (;;)
	      {
		int flags = sec.i ();
		if (flags < 0)
		  break;

		if ((flags & cbf_hidden)
		    && (flags & (cbf_using | cbf_export)))
		  sec.set_overrun ();

		tree decl = sec.tree_node ();
		if (sec.get_overrun ())
		  break;

		if (decls && TREE_CODE (decl) == TYPE_DECL)
		  {
		    /* Stat hack.  */
		    if (type || !DECL_IMPLICIT_TYPEDEF_P (decl))
		      sec.set_overrun ();
		    type = decl;
		  }
		else
		  {
		    if (decls
			|| (flags & (cbf_hidden | cbf_wrapped))
			|| DECL_FUNCTION_TEMPLATE_P (decl))
		      {
			decls = ovl_make (decl, decls);
			if (flags & cbf_using)
			  {
			    dedup = true;
			    OVL_USING_P (decls) = true;
			    if (flags & cbf_export)
			      OVL_EXPORT_P (decls) = true;
			  }

			if (flags & cbf_hidden)
			  OVL_HIDDEN_P (decls) = true;
			else if (dedup)
			  OVL_DEDUP_P (decls) = true;
		      }
		    else
		      decls = decl;

		    if (flags & cbf_export
			|| (!(flags & cbf_hidden)
			    && (is_module () || is_partition ())))
		      visible = decls;
		  }
	      }

	    if (!decls)
	      sec.set_overrun ();

	    if (sec.get_overrun ())
	      break; /* Bail.  */

	    dump () && dump ("Binding of %P", ns, name);
	    if (!set_module_binding (ns, name, mod,
				     is_header () ? -1
				     : is_module () || is_partition () ? 1
				     : 0,
				     decls, type, visible))
	      sec.set_overrun ();
	  }
	  break;

	case ct_decl:
	  /* A decl.  */
	  {
	    tree decl = sec.tree_node ();
	    dump () && dump ("Read declaration of %N", decl);
	  }
	  break;

	case ct_defn:
	  {
	    tree decl = sec.tree_node ();
	    dump () && dump ("Reading definition of %N", decl);
	    sec.read_definition (decl);
	  }
	  break;
	}
    }

  /* When lazy loading is in effect, we can be in the middle of
     parsing or instantiating a function.  Save it away;
     push_function_context does too much work.  */
  tree old_cfd = current_function_decl;
  struct function *old_cfun = cfun;
  for (const post_process_data &pdata : sec.post_process ())
    {
      tree decl = pdata.decl;

      bool abstract = false;
      if (TREE_CODE (decl) == TEMPLATE_DECL)
	{
	  abstract = true;
	  decl = DECL_TEMPLATE_RESULT (decl);
	}

      current_function_decl = decl;
      allocate_struct_function (decl, abstract);
      cfun->language = ggc_cleared_alloc<language_function> ();
      cfun->language->base.x_stmt_tree.stmts_are_full_exprs_p = 1;
      cfun->function_start_locus = pdata.start_locus;
      cfun->function_end_locus = pdata.end_locus;

      if (abstract)
	;
      else if (DECL_ABSTRACT_P (decl))
	vec_safe_push (post_load_decls, decl);
      else
	{
	  bool aggr = aggregate_value_p (DECL_RESULT (decl), decl);
#ifdef PCC_STATIC_STRUCT_RETURN
	  cfun->returns_pcc_struct = aggr;
#endif
	  cfun->returns_struct = aggr;

	  if (DECL_COMDAT (decl))
	    // FIXME: Comdat grouping?
	    comdat_linkage (decl);
	  note_vague_linkage_fn (decl);
	  cgraph_node::finalize_function (decl, true);
	}
    }
  /* Look, function.cc's interface to cfun does too much for us; we
     just need to restore the old value.  I do not want to go
     redesigning that API right now.  */
#undef cfun
  cfun = old_cfun;
  current_function_decl = old_cfd;
  comparing_dependent_aliases--;

  dump.outdent ();
  dump () && dump ("Read section:%u", snum);

  loaded_clusters++;

  if (!sec.end (from ()))
    return false;

  return true;
}
15367
15368void
15369module_state::write_namespace (bytes_out &sec, depset *dep)
15370{
15371 unsigned ns_num = dep->cluster;
15372 unsigned ns_import = 0;
15373
15374 if (dep->is_import ())
15375 ns_import = dep->section;
15376 else if (dep->get_entity () != global_namespace)
15377 ns_num++;
15378
15379 sec.u (v: ns_import);
15380 sec.u (v: ns_num);
15381}
15382
15383tree
15384module_state::read_namespace (bytes_in &sec)
15385{
15386 unsigned ns_import = sec.u ();
15387 unsigned ns_num = sec.u ();
15388 tree ns = NULL_TREE;
15389
15390 if (ns_import || ns_num)
15391 {
15392 if (!ns_import)
15393 ns_num--;
15394
15395 if (unsigned origin = slurp->remap_module (owner: ns_import))
15396 {
15397 module_state *from = (*modules)[origin];
15398 if (ns_num < from->entity_num)
15399 {
15400 binding_slot &slot = (*entity_ary)[from->entity_lwm + ns_num];
15401
15402 if (!slot.is_lazy ())
15403 ns = slot;
15404 }
15405 }
15406 else
15407 sec.set_overrun ();
15408 }
15409 else
15410 ns = global_namespace;
15411
15412 return ns;
15413}
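The pairing above reserves (0,0) for the global namespace by biasing locally-owned entity numbers by one.  A minimal standalone sketch of that round-trip, with illustrative names (GCC's real reader additionally remaps the import and consults the entity table):

```cpp
#include <cassert>
#include <utility>

// Encode a namespace key: an imported namespace keeps its number as
// streamed; a locally-owned one is biased by one so (0,0) stays free
// to mean the global namespace.
static std::pair<unsigned, unsigned>
encode (unsigned import, unsigned num)
{
  if (!import)
    num++;	// Bias local entity numbers past the (0,0) global slot.
  return {import, num};
}

// Decode a key.  Returns false for the global namespace (no table
// lookup needed); otherwise stores the entity number in *ENTITY.
static bool
decode (unsigned import, unsigned num, unsigned *entity)
{
  if (!import && !num)
    return false;	// (0,0): the global namespace.
  if (!import)
    num--;		// Undo the local bias.
  *entity = num;
  return true;
}
```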

/* SPACES is a sorted vector of namespaces.  Write out the namespaces
   to the MOD_SNAME_PFX.nms section.  */

void
module_state::write_namespaces (elf_out *to, vec<depset *> spaces,
				unsigned num, unsigned *crc_p)
{
  dump () && dump ("Writing namespaces");
  dump.indent ();

  bytes_out sec (to);
  sec.begin ();

  for (unsigned ix = 0; ix != num; ix++)
    {
      depset *b = spaces[ix];
      tree ns = b->get_entity ();

      gcc_checking_assert (TREE_CODE (ns) == NAMESPACE_DECL);
      /* P1815 may have something to say about this.  */
      gcc_checking_assert (TREE_PUBLIC (ns));

      unsigned flags = 0;
      if (TREE_PUBLIC (ns))
	flags |= 1;
      if (DECL_NAMESPACE_INLINE_P (ns))
	flags |= 2;
      if (DECL_MODULE_PURVIEW_P (ns))
	flags |= 4;
      if (DECL_MODULE_EXPORT_P (ns))
	flags |= 8;

      dump () && dump ("Writing namespace:%u %N%s%s%s%s",
		       b->cluster, ns,
		       flags & 1 ? ", public" : "",
		       flags & 2 ? ", inline" : "",
		       flags & 4 ? ", purview" : "",
		       flags & 8 ? ", export" : "");
      sec.u (b->cluster);
      sec.u (to->name (DECL_NAME (ns)));
      write_namespace (sec, b->deps[0]);

      sec.u (flags);
      write_location (sec, DECL_SOURCE_LOCATION (ns));

      if (DECL_NAMESPACE_INLINE_P (ns))
	{
	  if (tree attr = lookup_attribute ("abi_tag", DECL_ATTRIBUTES (ns)))
	    {
	      tree tags = TREE_VALUE (attr);
	      sec.u (list_length (tags));
	      for (tree tag = tags; tag; tag = TREE_CHAIN (tag))
		sec.str (TREE_STRING_POINTER (TREE_VALUE (tag)));
	    }
	  else
	    sec.u (0);
	}
    }

  sec.end (to, to->name (MOD_SNAME_PFX ".nms"), crc_p);
  dump.outdent ();
}

/* Read the namespace hierarchy from MOD_SNAME_PFX.nms and
   reconstruct the imported namespaces from that data.  */

bool
module_state::read_namespaces (unsigned num)
{
  bytes_in sec;

  if (!sec.begin (loc, from (), MOD_SNAME_PFX ".nms"))
    return false;

  dump () && dump ("Reading namespaces");
  dump.indent ();

  for (unsigned ix = 0; ix != num; ix++)
    {
      unsigned entity_index = sec.u ();
      unsigned name = sec.u ();

      tree parent = read_namespace (sec);

      /* See comment in write_namespaces about why not bits.  */
      unsigned flags = sec.u ();
      location_t src_loc = read_location (sec);
      unsigned tags_count = (flags & 2) ? sec.u () : 0;

      if (entity_index >= entity_num
	  || !parent
	  || (flags & 0xc) == 0x8)
	sec.set_overrun ();

      tree tags = NULL_TREE;
      while (tags_count--)
	{
	  size_t len;
	  const char *str = sec.str (&len);
	  tags = tree_cons (NULL_TREE, build_string (len + 1, str), tags);
	}
      tags = nreverse (tags);

      if (sec.get_overrun ())
	break;

      tree id = name ? get_identifier (from ()->name (name)) : NULL_TREE;

      dump () && dump ("Read namespace:%u %P%s%s%s%s",
		       entity_index, parent, id,
		       flags & 1 ? ", public" : "",
		       flags & 2 ? ", inline" : "",
		       flags & 4 ? ", purview" : "",
		       flags & 8 ? ", export" : "");
      bool visible_p = ((flags & 8)
			|| ((flags & 1)
			    && (flags & 4)
			    && (is_partition () || is_module ())));
      tree inner = add_imported_namespace (parent, id, src_loc, mod,
					   bool (flags & 2), visible_p);
      if (!inner)
	{
	  sec.set_overrun ();
	  break;
	}

      if (is_partition ())
	{
	  if (flags & 4)
	    DECL_MODULE_PURVIEW_P (inner) = true;
	  if (flags & 8)
	    DECL_MODULE_EXPORT_P (inner) = true;
	}

      if (tags)
	DECL_ATTRIBUTES (inner)
	  = tree_cons (get_identifier ("abi_tag"), tags,
		       DECL_ATTRIBUTES (inner));

      /* Install the namespace.  */
      (*entity_ary)[entity_lwm + entity_index] = inner;
      if (DECL_MODULE_IMPORT_P (inner))
	{
	  bool existed;
	  unsigned *slot = &entity_map->get_or_insert
	    (DECL_UID (inner), &existed);
	  if (existed)
	    /* If it existed, it should match.  */
	    gcc_checking_assert (inner == (*entity_ary)[*slot]);
	  else
	    *slot = entity_lwm + entity_index;
	}
    }
  dump.outdent ();
  if (!sec.end (from ()))
    return false;
  return true;
}

/* Write the binding table to MOD_SNAME_PFX.bnd.  */

unsigned
module_state::write_bindings (elf_out *to, vec<depset *> sccs, unsigned *crc_p)
{
  dump () && dump ("Writing binding table");
  dump.indent ();

  unsigned num = 0;
  bytes_out sec (to);
  sec.begin ();

  for (unsigned ix = 0; ix != sccs.length (); ix++)
    {
      depset *b = sccs[ix];
      if (b->is_binding ())
	{
	  tree ns = b->get_entity ();
	  dump () && dump ("Bindings %P section:%u", ns, b->get_name (),
			   b->section);
	  sec.u (to->name (b->get_name ()));
	  write_namespace (sec, b->deps[0]);
	  sec.u (b->section);
	  num++;
	}
    }

  sec.end (to, to->name (MOD_SNAME_PFX ".bnd"), crc_p);
  dump.outdent ();

  return num;
}

/* Read the binding table from MOD_SNAME_PFX.bnd.  */

bool
module_state::read_bindings (unsigned num, unsigned lwm, unsigned hwm)
{
  bytes_in sec;

  if (!sec.begin (loc, from (), MOD_SNAME_PFX ".bnd"))
    return false;

  dump () && dump ("Reading binding table");
  dump.indent ();
  for (; !sec.get_overrun () && num--;)
    {
      const char *name = from ()->name (sec.u ());
      tree ns = read_namespace (sec);
      unsigned snum = sec.u ();

      if (!ns || !name || (snum - lwm) >= (hwm - lwm))
	sec.set_overrun ();
      if (!sec.get_overrun ())
	{
	  tree id = get_identifier (name);
	  dump () && dump ("Bindings %P section:%u", ns, id, snum);
	  if (mod && !import_module_binding (ns, id, mod, snum))
	    break;
	}
    }

  dump.outdent ();
  if (!sec.end (from ()))
    return false;
  return true;
}
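The section-number validation above uses a single unsigned comparison, `(snum - lwm) >= (hwm - lwm)`: because the subtraction wraps around for `snum < lwm`, one test rejects both too-low and too-high values.  A standalone illustration of the idiom:

```cpp
#include <cassert>

// One unsigned comparison checks lwm <= snum < hwm: if snum < lwm the
// subtraction wraps to a huge value, which also fails the test.
static bool
in_range (unsigned snum, unsigned lwm, unsigned hwm)
{
  return (snum - lwm) < (hwm - lwm);
}
```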

/* Write the entity table to MOD_SNAME_PFX.ent.

   Each entry is a section number.  */

void
module_state::write_entities (elf_out *to, vec<depset *> depsets,
			      unsigned count, unsigned *crc_p)
{
  dump () && dump ("Writing entities");
  dump.indent ();

  bytes_out sec (to);
  sec.begin ();

  unsigned current = 0;
  for (unsigned ix = 0; ix < depsets.length (); ix++)
    {
      depset *d = depsets[ix];

      switch (d->get_entity_kind ())
	{
	default:
	  break;

	case depset::EK_NAMESPACE:
	  if (!d->is_import () && d->get_entity () != global_namespace)
	    {
	      gcc_checking_assert (d->cluster == current);
	      current++;
	      sec.u (0);
	    }
	  break;

	case depset::EK_DECL:
	case depset::EK_SPECIALIZATION:
	case depset::EK_PARTIAL:
	  gcc_checking_assert (!d->is_unreached ()
			       && !d->is_import ()
			       && d->cluster == current
			       && d->section);
	  current++;
	  sec.u (d->section);
	  break;
	}
    }
  gcc_assert (count == current);
  sec.end (to, to->name (MOD_SNAME_PFX ".ent"), crc_p);
  dump.outdent ();
}

/* Read the entity table from MOD_SNAME_PFX.ent, marking each
   non-namespace slot lazy with its containing section.  */

bool
module_state::read_entities (unsigned count, unsigned lwm, unsigned hwm)
{
  trees_in sec (this);

  if (!sec.begin (loc, from (), MOD_SNAME_PFX ".ent"))
    return false;

  dump () && dump ("Reading entities");
  dump.indent ();

  for (binding_slot *slot = entity_ary->begin () + entity_lwm; count--; slot++)
    {
      unsigned snum = sec.u ();
      if (snum && (snum - lwm) >= (hwm - lwm))
	sec.set_overrun ();
      if (sec.get_overrun ())
	break;

      if (snum)
	slot->set_lazy (snum << 2);
    }

  dump.outdent ();
  if (!sec.end (from ()))
    return false;
  return true;
}
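The `slot->set_lazy (snum << 2)` call above passes the section number pre-shifted by two bits.  A plausible reading, sketched here purely as an assumption about the slot layout (this is not lifted from GCC's actual binding_slot implementation), is that the low bits of the slot word tag its state while the upper bits carry the section to load from:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical tagged-slot layout: bit 0 marks the slot lazy, and the
// payload (the section number, pre-shifted by the caller) occupies
// the remaining bits.
struct lazy_slot
{
  uintptr_t val = 0;

  // V's low two bits are assumed clear; callers pass snum << 2.
  void set_lazy (uintptr_t v) { val = v | 1; }
  bool is_lazy () const { return val & 1; }
  unsigned section () const { return unsigned (val >> 2); }
};
```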

/* Write the pending table to MOD_SNAME_PFX.pnd.

   The pending table holds information about clusters that need to be
   loaded because they contain information about something that is not
   found by namespace-scope lookup.

   The three cases are:

   (a) Template (maybe-partial) specializations that we have
   instantiated or defined.  When an importer needs to instantiate
   that template, they /must have/ the partial, explicit & extern
   specializations available.  If they have the other specializations
   available, they'll have less work to do.  Thus, when we're about to
   instantiate FOO, we have to be able to ask 'are there any
   specializations of FOO in our imports?'.

   (b) (Maybe-implicit) member function definitions.  A class could
   be defined in one header, and an inline member defined in a
   different header (this occurs in the STL).  Similarly, like the
   specialization case, an implicit member function could have been
   'instantiated' in one module, and it'd be nice to not have to
   reinstantiate it in another.

   (c) A member class completed elsewhere.  A member class could be
   declared in one header and defined in another.  We need to know to
   load the class definition before looking in it.  This turns out to
   be a specific case of #b, so we can treat these the same.  But it
   does highlight an issue -- there could be an intermediate import
   between the outermost containing namespace-scope class and the
   innermost being-defined member class.  This is actually possible
   with all of these cases, so be aware -- we're not just talking of
   one level of import to get to the innermost namespace.

   This gets complicated fast; it took me multiple attempts to even
   get something remotely working.  Partially because I focussed on
   optimizing what I think turns out to be a smaller problem, given
   the known need to do the more general case *anyway*.  I document
   the smaller problem, because it does appear to be the natural way
   to do it.  It's a trap!

   **** THE TRAP

   Let's refer to the primary template or the containing class as the
   KEY.  And the specialization or member as the PENDING-ENTITY.  (To
   avoid having to say those mouthfuls all the time.)

   In either case, we have an entity and we need some way of mapping
   that to a set of entities that need to be loaded before we can
   proceed with whatever processing of the entity we were going to do.

   We need to link the key to the pending-entity in some way.  Given a
   key, tell me the pending-entities I need to have loaded.  However
   we tie the key to the pending-entity, the tie must not rely on the
   key being loaded -- that'd defeat the lazy loading scheme.

   As the key will be an import, we know its entity number (either
   because we imported it, or we're writing it out too).  Thus we can
   generate a map of key-indices to pending-entities.  The
   pending-entity indices will be into our span of the entity table,
   and thus allow them to be lazily loaded.  The key index will be
   into another slot of the entity table.  Notice that this checking
   could be expensive; we don't want to iterate over a bunch of
   pending-entity indices (across multiple imports) every time we're
   about to do the thing with the key.  We need to quickly determine
   'definitely nothing needed'.

   That's almost good enough, except that key indices are not unique
   in a couple of cases :( Specifically, the Global Module or a module
   partition can result in multiple modules assigning an entity index
   for the key.  The decl-merging on loading will detect that, so we
   only have one key loaded, and in the entity hash it'll indicate the
   entity index of the first load.  Which might be different to how we
   know it.  Notice this is restricted to GM entities or this-module
   entities; foreign imports cannot have this.

   We can simply resolve this in the direction of how this module
   referred to the key to how the importer knows it.  Look in the
   entity table slot that we nominate, maybe lazily load it, and then
   look up the resultant entity in the entity hash to learn how the
   importer knows it.

   But we need to go in the other direction :( Given the key, find all
   the index-aliases of that key.  We can partially solve that by
   adding an alias hash table.  Whenever we load a merged decl, add or
   augment a mapping from the entity (or its entity-index) to the
   newly-discovered index.  Then when we look for pending entities of
   a key, we also iterate over the aliases this mapping provides.

   But that requires the alias to be loaded.  And that's not
   necessarily true.

   *** THE SIMPLER WAY

   The remaining fixed thing we have is the innermost namespace
   containing the ultimate namespace-scope container of the key and
   the name of that container (which might be the key itself).  I.e. a
   namespace-decl/identifier/module tuple.  Let's call this the
   top-key.  We'll discover that the module is not important here,
   because of the cross-module possibilities mentioned in case #c
   above.  We can't mark up namespace-binding slots.  The best we can
   do is mark the binding vector with 'there's something here', and
   have another map from namespace/identifier pairs to a vector of
   pending entity indices.
15823
15824 Maintain a pending-entity map. This is keyed by top-key, and
15825 maps to a vector of pending-entity indices. On the binding vector
15826 have flags saying whether the pending-name-entity map has contents.
15827 (We might want to further extend the key to be GM-vs-Partition and
15828 specialization-vs-member, but let's not get ahead of ourselves.)
15829
15830 For every key-like entity, find the outermost namespace-scope
15831 name. Use that to lookup in the pending-entity map and then make
15832 sure the specified entities are loaded.
15833
15834 An optimization might be to have a flag in each key-entity saying
15835 that its top key might be in the entity table. It's not clear to
15836 me how to set that flag cheaply -- cheaper than just looking.
15837
15838 FIXME: It'd be nice to have a bit in decls to tell us whether to
15839 even try this. We can have a 'already done' flag, that we set when
15840 we've done KLASS's lazy pendings. When we import a module that
15841 registers pendings on the same top-key as KLASS we need to clear
15842 the flag. A recursive walk of the top-key clearing the bit will
15843 suffice. Plus we only need to recurse on classes that have the bit
15844 set. (That means we need to set the bit on parents of KLASS here,
15845 don't forget.) However, first: correctness, second: efficiency. */
15846
unsigned
module_state::write_pendings (elf_out *to, vec<depset *> depsets,
                              depset::hash &table, unsigned *crc_p)
{
  dump () && dump ("Writing pending-entities");
  dump.indent ();

  trees_out sec (to, this, table);
  sec.begin ();

  unsigned count = 0;
  tree cache_ns = NULL_TREE;
  tree cache_id = NULL_TREE;
  unsigned cache_section = ~0;
  for (unsigned ix = 0; ix < depsets.length (); ix++)
    {
      depset *d = depsets[ix];

      if (d->is_binding ())
        continue;

      if (d->is_import ())
        continue;

      if (!(d->get_entity_kind () == depset::EK_SPECIALIZATION
            || d->get_entity_kind () == depset::EK_PARTIAL
            || (d->get_entity_kind () == depset::EK_DECL && d->is_member ())))
        continue;

      tree key_decl = nullptr;
      tree key_ns = find_pending_key (d->get_entity (), &key_decl);
      tree key_name = DECL_NAME (key_decl);

      if (IDENTIFIER_ANON_P (key_name))
        {
          gcc_checking_assert (IDENTIFIER_LAMBDA_P (key_name));
          if (tree attached = LAMBDA_TYPE_EXTRA_SCOPE (TREE_TYPE (key_decl)))
            key_name = DECL_NAME (attached);
          else
            {
              /* There's nothing to attach it to.  Must
                 always reinstantiate.  */
              dump ()
                && dump ("Unattached lambda %s %N[%u] section:%u",
                         d->get_entity_kind () == depset::EK_DECL
                         ? "Member" : "Specialization", d->get_entity (),
                         d->cluster, d->section);
              continue;
            }
        }

      char const *also = "";
      if (d->section == cache_section
          && key_ns == cache_ns
          && key_name == cache_id)
        /* Same section & key as previous, no need to repeat ourselves.  */
        also = "also ";
      else
        {
          cache_ns = key_ns;
          cache_id = key_name;
          cache_section = d->section;
          gcc_checking_assert (table.find_dependency (cache_ns));
          sec.tree_node (cache_ns);
          sec.tree_node (cache_id);
          sec.u (d->cluster);
          count++;
        }
      dump () && dump ("Pending %s %N entity:%u section:%u %skeyed to %P",
                       d->get_entity_kind () == depset::EK_DECL
                       ? "member" : "specialization", d->get_entity (),
                       d->cluster, cache_section, also, cache_ns, cache_id);
    }
  sec.end (to, to->name (MOD_SNAME_PFX ".pnd"), crc_p);
  dump.outdent ();

  return count;
}

bool
module_state::read_pendings (unsigned count)
{
  trees_in sec (this);

  if (!sec.begin (loc, from (), MOD_SNAME_PFX ".pnd"))
    return false;

  dump () && dump ("Reading %u pendings", count);
  dump.indent ();

  for (unsigned ix = 0; ix != count; ix++)
    {
      pending_key key;
      unsigned index;

      key.ns = sec.tree_node ();
      key.id = sec.tree_node ();
      index = sec.u ();

      if (!key.ns || !key.id
          || !(TREE_CODE (key.ns) == NAMESPACE_DECL
               && !DECL_NAMESPACE_ALIAS (key.ns))
          || !identifier_p (key.id)
          || index >= entity_num)
        sec.set_overrun ();

      if (sec.get_overrun ())
        break;

      dump () && dump ("Pending:%u keyed to %P", index, key.ns, key.id);

      index += entity_lwm;
      auto &vec = pending_table->get_or_insert (key);
      vec.safe_push (index);
    }

  dump.outdent ();
  if (!sec.end (from ()))
    return false;
  return true;
}

/* Read & write locations.  */
enum loc_kind {
  LK_ORDINARY,
  LK_MACRO,
  LK_IMPORT_ORDINARY,
  LK_IMPORT_MACRO,
  LK_ADHOC,
  LK_RESERVED,
};

static const module_state *
module_for_ordinary_loc (location_t loc)
{
  unsigned pos = 0;
  unsigned len = ool->length () - pos;

  while (len)
    {
      unsigned half = len / 2;
      module_state *probe = (*ool)[pos + half];
      if (loc < probe->ordinary_locs.first)
        len = half;
      else if (loc < probe->ordinary_locs.first + probe->ordinary_locs.second)
        return probe;
      else
        {
          pos += half + 1;
          len = len - (half + 1);
        }
    }

  return nullptr;
}

static const module_state *
module_for_macro_loc (location_t loc)
{
  unsigned pos = 1;
  unsigned len = modules->length () - pos;

  while (len)
    {
      unsigned half = len / 2;
      module_state *probe = (*modules)[pos + half];
      if (loc < probe->macro_locs.first)
        {
          pos += half + 1;
          len = len - (half + 1);
        }
      else if (loc >= probe->macro_locs.first + probe->macro_locs.second)
        len = half;
      else
        return probe;
    }

  return NULL;
}

location_t
module_state::imported_from () const
{
  location_t from = loc;
  line_map_ordinary const *fmap
    = linemap_check_ordinary (linemap_lookup (line_table, from));

  if (MAP_MODULE_P (fmap))
    from = linemap_included_from (fmap);

  return from;
}

/* Note that LOC will need writing.  This allows us to prune locations
   that are not needed.  */

bool
module_state::note_location (location_t loc)
{
  bool added = false;
  if (!macro_loc_table && !ord_loc_table)
    ;
  else if (loc < RESERVED_LOCATION_COUNT)
    ;
  else if (IS_ADHOC_LOC (loc))
    {
      location_t locus = get_location_from_adhoc_loc (line_table, loc);
      note_location (locus);
      source_range range = get_range_from_loc (line_table, loc);
      if (range.m_start != locus)
        note_location (range.m_start);
      note_location (range.m_finish);
    }
  else if (loc >= LINEMAPS_MACRO_LOWEST_LOCATION (line_table))
    {
      if (spans.macro (loc))
        {
          const line_map *map = linemap_lookup (line_table, loc);
          const line_map_macro *mac_map = linemap_check_macro (map);
          hashval_t hv = macro_loc_traits::hash (mac_map);
          macro_loc_info *slot
            = macro_loc_table->find_slot_with_hash (mac_map, hv, INSERT);
          if (!slot->src)
            {
              slot->src = mac_map;
              slot->remap = 0;
              // Expansion locations could themselves be from a
              // macro, we need to note them all.
              note_location (mac_map->m_expansion);
              gcc_checking_assert (mac_map->n_tokens);
              location_t tloc = UNKNOWN_LOCATION;
              for (unsigned ix = mac_map->n_tokens * 2; ix--;)
                if (mac_map->macro_locations[ix] != tloc)
                  {
                    tloc = mac_map->macro_locations[ix];
                    note_location (tloc);
                  }
              added = true;
            }
        }
    }
  else if (IS_ORDINARY_LOC (loc))
    {
      if (spans.ordinary (loc))
        {
          const line_map *map = linemap_lookup (line_table, loc);
          const line_map_ordinary *ord_map = linemap_check_ordinary (map);
          ord_loc_info lkup;
          lkup.src = ord_map;
          lkup.span = 1 << ord_map->m_column_and_range_bits;
          lkup.offset = (loc - MAP_START_LOCATION (ord_map)) & ~(lkup.span - 1);
          lkup.remap = 0;
          ord_loc_info *slot = (ord_loc_table->find_slot_with_hash
                                (lkup, ord_loc_traits::hash (lkup), INSERT));
          if (!slot->src)
            {
              *slot = lkup;
              added = true;
            }
        }
    }
  else
    gcc_unreachable ();
  return added;
}

/* If we're not streaming, record that we need location LOC.
   Otherwise stream it.  */

void
module_state::write_location (bytes_out &sec, location_t loc)
{
  if (!sec.streaming_p ())
    {
      note_location (loc);
      return;
    }

  if (loc < RESERVED_LOCATION_COUNT)
    {
      dump (dumper::LOCATION) && dump ("Reserved location %u", unsigned (loc));
      sec.u (LK_RESERVED + loc);
    }
  else if (IS_ADHOC_LOC (loc))
    {
      dump (dumper::LOCATION) && dump ("Adhoc location");
      sec.u (LK_ADHOC);
      location_t locus = get_location_from_adhoc_loc (line_table, loc);
      write_location (sec, locus);
      source_range range = get_range_from_loc (line_table, loc);
      if (range.m_start == locus)
        /* Compress.  */
        range.m_start = UNKNOWN_LOCATION;
      write_location (sec, range.m_start);
      write_location (sec, range.m_finish);
      unsigned discriminator = get_discriminator_from_adhoc_loc (line_table, loc);
      sec.u (discriminator);
    }
  else if (loc >= LINEMAPS_MACRO_LOWEST_LOCATION (line_table))
    {
      const macro_loc_info *info = nullptr;
      unsigned offset = 0;
      if (unsigned hwm = macro_loc_remap->length ())
        {
          info = macro_loc_remap->begin ();
          while (hwm != 1)
            {
              unsigned mid = hwm / 2;
              if (MAP_START_LOCATION (info[mid].src) <= loc)
                {
                  info += mid;
                  hwm -= mid;
                }
              else
                hwm = mid;
            }
          offset = loc - MAP_START_LOCATION (info->src);
          if (offset > info->src->n_tokens)
            info = nullptr;
        }

      gcc_checking_assert (bool (info) == bool (spans.macro (loc)));

      if (info)
        {
          offset += info->remap;
          sec.u (LK_MACRO);
          sec.u (offset);
          dump (dumper::LOCATION)
            && dump ("Macro location %u output %u", loc, offset);
        }
      else if (const module_state *import = module_for_macro_loc (loc))
        {
          unsigned off = loc - import->macro_locs.first;
          sec.u (LK_IMPORT_MACRO);
          sec.u (import->remap);
          sec.u (off);
          dump (dumper::LOCATION)
            && dump ("Imported macro location %u output %u:%u",
                     loc, import->remap, off);
        }
      else
        gcc_unreachable ();
    }
  else if (IS_ORDINARY_LOC (loc))
    {
      const ord_loc_info *info = nullptr;
      unsigned offset = 0;
      if (unsigned hwm = ord_loc_remap->length ())
        {
          info = ord_loc_remap->begin ();
          while (hwm != 1)
            {
              unsigned mid = hwm / 2;
              if (MAP_START_LOCATION (info[mid].src) + info[mid].offset <= loc)
                {
                  info += mid;
                  hwm -= mid;
                }
              else
                hwm = mid;
            }
          offset = loc - MAP_START_LOCATION (info->src) - info->offset;
          if (offset > info->span)
            info = nullptr;
        }

      gcc_checking_assert (bool (info) == bool (spans.ordinary (loc)));

      if (info)
        {
          offset += info->remap;
          sec.u (LK_ORDINARY);
          sec.u (offset);

          dump (dumper::LOCATION)
            && dump ("Ordinary location %u output %u", loc, offset);
        }
      else if (const module_state *import = module_for_ordinary_loc (loc))
        {
          unsigned off = loc - import->ordinary_locs.first;
          sec.u (LK_IMPORT_ORDINARY);
          sec.u (import->remap);
          sec.u (off);
          dump (dumper::LOCATION)
            && dump ("Imported ordinary location %u output %u:%u",
                     loc, import->remap, off);
        }
      else
        gcc_unreachable ();
    }
  else
    gcc_unreachable ();
}

location_t
module_state::read_location (bytes_in &sec) const
{
  location_t locus = UNKNOWN_LOCATION;
  unsigned kind = sec.u ();
  switch (kind)
    {
    default:
      {
        if (kind < LK_RESERVED + RESERVED_LOCATION_COUNT)
          locus = location_t (kind - LK_RESERVED);
        else
          sec.set_overrun ();
        dump (dumper::LOCATION)
          && dump ("Reserved location %u", unsigned (locus));
      }
      break;

    case LK_ADHOC:
      {
        dump (dumper::LOCATION) && dump ("Adhoc location");
        locus = read_location (sec);
        source_range range;
        range.m_start = read_location (sec);
        if (range.m_start == UNKNOWN_LOCATION)
          range.m_start = locus;
        range.m_finish = read_location (sec);
        unsigned discriminator = sec.u ();
        if (locus != loc && range.m_start != loc && range.m_finish != loc)
          locus = line_table->get_or_create_combined_loc (locus, range,
                                                          nullptr,
                                                          discriminator);
      }
      break;

    case LK_MACRO:
      {
        unsigned off = sec.u ();

        if (macro_locs.second)
          {
            if (off < macro_locs.second)
              locus = off + macro_locs.first;
            else
              sec.set_overrun ();
          }
        else
          locus = loc;
        dump (dumper::LOCATION)
          && dump ("Macro %u becoming %u", off, locus);
      }
      break;

    case LK_ORDINARY:
      {
        unsigned off = sec.u ();
        if (ordinary_locs.second)
          {
            if (off < ordinary_locs.second)
              locus = off + ordinary_locs.first;
            else
              sec.set_overrun ();
          }
        else
          locus = loc;

        dump (dumper::LOCATION)
          && dump ("Ordinary location %u becoming %u", off, locus);
      }
      break;

    case LK_IMPORT_MACRO:
    case LK_IMPORT_ORDINARY:
      {
        unsigned mod = sec.u ();
        unsigned off = sec.u ();
        const module_state *import = NULL;

        if (!mod && !slurp->remap)
          /* This is an early read of a partition location during the
             read of our ordinary location map.  */
          import = this;
        else
          {
            mod = slurp->remap_module (mod);
            if (!mod)
              sec.set_overrun ();
            else
              import = (*modules)[mod];
          }

        if (import)
          {
            if (kind == LK_IMPORT_MACRO)
              {
                if (!import->macro_locs.second)
                  locus = import->loc;
                else if (off < import->macro_locs.second)
                  locus = off + import->macro_locs.first;
                else
                  sec.set_overrun ();
              }
            else
              {
                if (!import->ordinary_locs.second)
                  locus = import->loc;
                else if (off < import->ordinary_locs.second)
                  locus = import->ordinary_locs.first + off;
                else
                  sec.set_overrun ();
              }
          }
      }
      break;
    }

  return locus;
}

/* Allocate hash tables to record needed locations.  */

void
module_state::write_init_maps ()
{
  macro_loc_table = new hash_table<macro_loc_traits> (EXPERIMENT (1, 400));
  ord_loc_table = new hash_table<ord_loc_traits> (EXPERIMENT (1, 400));
}

/* Prepare the span adjustments.  We prune unneeded locations -- at
   this point every needed location must have been seen by
   note_location.  */

range_t
module_state::write_prepare_maps (module_state_config *cfg, bool has_partitions)
{
  dump () && dump ("Preparing locations");
  dump.indent ();

  dump () && dump ("Reserved locations [%u,%u) macro [%u,%u)",
                   spans[loc_spans::SPAN_RESERVED].ordinary.first,
                   spans[loc_spans::SPAN_RESERVED].ordinary.second,
                   spans[loc_spans::SPAN_RESERVED].macro.first,
                   spans[loc_spans::SPAN_RESERVED].macro.second);

  range_t info {0, 0};

  // Sort the noted lines.
  vec_alloc (ord_loc_remap, ord_loc_table->size ());
  for (auto iter = ord_loc_table->begin (), end = ord_loc_table->end ();
       iter != end; ++iter)
    ord_loc_remap->quick_push (*iter);
  ord_loc_remap->qsort (&ord_loc_info::compare);

  // Note included-from maps.
  bool added = false;
  const line_map_ordinary *current = nullptr;
  for (auto iter = ord_loc_remap->begin (), end = ord_loc_remap->end ();
       iter != end; ++iter)
    if (iter->src != current)
      {
        current = iter->src;
        for (auto probe = current;
             auto from = linemap_included_from (probe);
             probe = linemap_check_ordinary (linemap_lookup (line_table, from)))
          {
            if (has_partitions)
              {
                // Partition locations need to elide their module map
                // entry.
                probe
                  = linemap_check_ordinary (linemap_lookup (line_table, from));
                if (MAP_MODULE_P (probe))
                  from = linemap_included_from (probe);
              }

            if (!note_location (from))
              break;
            added = true;
          }
      }
  if (added)
    {
      // Reconstruct the line array as we added items to the hash table.
      vec_free (ord_loc_remap);
      vec_alloc (ord_loc_remap, ord_loc_table->size ());
      for (auto iter = ord_loc_table->begin (), end = ord_loc_table->end ();
           iter != end; ++iter)
        ord_loc_remap->quick_push (*iter);
      ord_loc_remap->qsort (&ord_loc_info::compare);
    }
  delete ord_loc_table;
  ord_loc_table = nullptr;

  // Merge (sufficiently) adjacent spans, and calculate remapping.
  constexpr unsigned adjacency = 2; // Allow 2 missing lines.
  auto begin = ord_loc_remap->begin (), end = ord_loc_remap->end ();
  auto dst = begin;
  unsigned offset = 0, range_bits = 0;
  ord_loc_info *base = nullptr;
  for (auto iter = begin; iter != end; ++iter)
    {
      if (base && iter->src == base->src)
        {
          if (base->offset + base->span
              + ((adjacency << base->src->m_column_and_range_bits)
                 // If there are few c&r bits, allow further separation.
                 | (adjacency << 4))
              >= iter->offset)
            {
              // Merge.
              offset -= base->span;
              base->span = iter->offset + iter->span - base->offset;
              offset += base->span;
              continue;
            }
        }
      else if (range_bits < iter->src->m_range_bits)
        range_bits = iter->src->m_range_bits;

      offset += ((1u << iter->src->m_range_bits) - 1);
      offset &= ~((1u << iter->src->m_range_bits) - 1);
      iter->remap = offset;
      offset += iter->span;
      base = dst;
      *dst++ = *iter;
    }
  ord_loc_remap->truncate (dst - begin);

  info.first = ord_loc_remap->length ();
  cfg->ordinary_locs = offset;
  cfg->loc_range_bits = range_bits;
  dump () && dump ("Ordinary maps:%u locs:%u range_bits:%u",
                   info.first, cfg->ordinary_locs,
                   cfg->loc_range_bits);

  // Remap the macro locations.
  vec_alloc (macro_loc_remap, macro_loc_table->size ());
  for (auto iter = macro_loc_table->begin (), end = macro_loc_table->end ();
       iter != end; ++iter)
    macro_loc_remap->quick_push (*iter);
  delete macro_loc_table;
  macro_loc_table = nullptr;

  macro_loc_remap->qsort (&macro_loc_info::compare);
  offset = 0;
  for (auto iter = macro_loc_remap->begin (), end = macro_loc_remap->end ();
       iter != end; ++iter)
    {
      auto mac = iter->src;
      iter->remap = offset;
      offset += mac->n_tokens;
    }
  info.second = macro_loc_remap->length ();
  cfg->macro_locs = offset;

  dump () && dump ("Macro maps:%u locs:%u", info.second, cfg->macro_locs);

  dump.outdent ();

  // If we have no ordinary locs, we must also have no macro locs.
  gcc_checking_assert (cfg->ordinary_locs || !cfg->macro_locs);

  return info;
}

bool
module_state::read_prepare_maps (const module_state_config *cfg)
{
  location_t ordinary = line_table->highest_location + 1;
  ordinary += cfg->ordinary_locs;

  location_t macro = LINEMAPS_MACRO_LOWEST_LOCATION (line_table);
  macro -= cfg->macro_locs;

  if (ordinary < LINE_MAP_MAX_LOCATION_WITH_COLS
      && macro >= LINE_MAP_MAX_LOCATION)
    /* OK, we have enough locations.  */
    return true;

  ordinary_locs.first = ordinary_locs.second = 0;
  macro_locs.first = macro_locs.second = 0;

  static bool informed = false;
  if (!informed)
    {
      /* Just give the notice once.  */
      informed = true;
      inform (loc, "unable to represent further imported source locations");
    }

  return false;
}

/* Write & read the location maps.  Not called if there are no
   locations.  */

void
module_state::write_ordinary_maps (elf_out *to, range_t &info,
                                   bool has_partitions, unsigned *crc_p)
{
  dump () && dump ("Writing ordinary location maps");
  dump.indent ();

  vec<const char *> filenames;
  filenames.create (20);

  /* Determine the unique filenames.  */
  const line_map_ordinary *current = nullptr;
  for (auto iter = ord_loc_remap->begin (), end = ord_loc_remap->end ();
       iter != end; ++iter)
    if (iter->src != current)
      {
        current = iter->src;
        const char *fname = ORDINARY_MAP_FILE_NAME (iter->src);

        /* We should never find a module linemap in an interval.  */
        gcc_checking_assert (!MAP_MODULE_P (iter->src));

        /* We expect very few filenames, so just an array.
           (Not true when headers are still in play :()  */
        for (unsigned jx = filenames.length (); jx--;)
          {
            const char *name = filenames[jx];
            if (0 == strcmp (name, fname))
              {
                /* Reset the linemap's name, because for things like
                   preprocessed input we could have multiple instances
                   of the same name, and we'd rather not percolate
                   that.  */
                const_cast<line_map_ordinary *> (iter->src)->to_file = name;
                fname = NULL;
                break;
              }
          }
        if (fname)
          filenames.safe_push (fname);
      }

  bytes_out sec (to);
  sec.begin ();

  /* Write the filenames.  */
  unsigned len = filenames.length ();
  sec.u (len);
  dump () && dump ("%u source file names", len);
  for (unsigned ix = 0; ix != len; ix++)
    {
      const char *fname = filenames[ix];
      dump (dumper::LOCATION) && dump ("Source file[%u]=%s", ix, fname);
      sec.str (fname);
    }

  sec.u (info.first);  /* Num maps.  */
  const ord_loc_info *base = nullptr;
  for (auto iter = ord_loc_remap->begin (), end = ord_loc_remap->end ();
       iter != end; ++iter)
    {
      dump (dumper::LOCATION)
        && dump ("Span:%u ordinary [%u+%u,+%u)->[%u,+%u)",
                 iter - ord_loc_remap->begin (),
                 MAP_START_LOCATION (iter->src), iter->offset, iter->span,
                 iter->remap, iter->span);

      if (!base || iter->src != base->src)
        base = iter;
      sec.u (iter->offset - base->offset);
      if (base == iter)
        {
          sec.u (iter->src->sysp);
          sec.u (iter->src->m_range_bits);
          sec.u (iter->src->m_column_and_range_bits - iter->src->m_range_bits);

          const char *fname = ORDINARY_MAP_FILE_NAME (iter->src);
          for (unsigned ix = 0; ix != filenames.length (); ix++)
            if (filenames[ix] == fname)
              {
                sec.u (ix);
                break;
              }
          unsigned line = ORDINARY_MAP_STARTING_LINE_NUMBER (iter->src);
          line += iter->offset >> iter->src->m_column_and_range_bits;
          sec.u (line);
        }
      sec.u (iter->remap);
      if (base == iter)
        {
          /* Write the included-from location, which means reading it
             while reading in the ordinary maps.  So we'd better not
             be getting ahead of ourselves.  */
          location_t from = linemap_included_from (iter->src);
          gcc_checking_assert (from < MAP_START_LOCATION (iter->src));
          if (from != UNKNOWN_LOCATION && has_partitions)
            {
              /* A partition's span will have a from pointing at a
                 MODULE_INC.  Find that map's from.  */
              line_map_ordinary const *fmap
                = linemap_check_ordinary (linemap_lookup (line_table, from));
              if (MAP_MODULE_P (fmap))
                from = linemap_included_from (fmap);
            }
          write_location (sec, from);
        }
    }

  filenames.release ();

  sec.end (to, to->name (MOD_SNAME_PFX ".olm"), crc_p);
  dump.outdent ();
}

void
module_state::write_macro_maps (elf_out *to, range_t &info, unsigned *crc_p)
{
  dump () && dump ("Writing macro location maps");
  dump.indent ();

  bytes_out sec (to);
  sec.begin ();

  dump () && dump ("Macro maps:%u", info.second);
  sec.u (info.second);

  unsigned macro_num = 0;
  for (auto iter = macro_loc_remap->end (), begin = macro_loc_remap->begin ();
       iter-- != begin;)
    {
      auto mac = iter->src;
      sec.u (iter->remap);
      sec.u (mac->n_tokens);
      sec.cpp_node (mac->macro);
      write_location (sec, mac->m_expansion);
      const location_t *locs = mac->macro_locations;
      /* There are lots of identical runs.  */
      location_t prev = UNKNOWN_LOCATION;
      unsigned count = 0;
      unsigned runs = 0;
      for (unsigned jx = mac->n_tokens * 2; jx--;)
        {
          location_t tok_loc = locs[jx];
          if (tok_loc == prev)
            {
              count++;
              continue;
            }
          runs++;
          sec.u (count);
          count = 1;
          prev = tok_loc;
          write_location (sec, tok_loc);
        }
      sec.u (count);
      dump (dumper::LOCATION)
        && dump ("Macro:%u %I %u/%u*2 locations [%u,%u)->%u",
                 macro_num, identifier (mac->macro),
                 runs, mac->n_tokens,
                 MAP_START_LOCATION (mac),
                 MAP_START_LOCATION (mac) + mac->n_tokens,
                 iter->remap);
      macro_num++;
    }
  gcc_assert (macro_num == info.second);

  sec.end (to, to->name (MOD_SNAME_PFX ".mlm"), crc_p);
  dump.outdent ();
}

bool
module_state::read_ordinary_maps (unsigned num_ord_locs, unsigned range_bits)
{
  bytes_in sec;

  if (!sec.begin (loc, from (), MOD_SNAME_PFX ".olm"))
    return false;
  dump () && dump ("Reading ordinary location maps");
  dump.indent ();

  /* Read the filename table.  */
  unsigned len = sec.u ();
  dump () && dump ("%u source file names", len);
  vec<const char *> filenames;
  filenames.create (len);
  for (unsigned ix = 0; ix != len; ix++)
    {
      size_t l;
      const char *buf = sec.str (&l);
      char *fname = XNEWVEC (char, l + 1);
      memcpy (fname, buf, l + 1);
      dump (dumper::LOCATION) && dump ("Source file[%u]=%s", ix, fname);
      /* We leak these names into the line-map table.  But it
         doesn't own them.  */
      filenames.quick_push (fname);
    }

  unsigned num_ordinary = sec.u ();
  dump () && dump ("Ordinary maps:%u, range_bits:%u", num_ordinary, range_bits);

  location_t offset = line_table->highest_location + 1;
  offset += ((1u << range_bits) - 1);
  offset &= ~((1u << range_bits) - 1);
  ordinary_locs.first = offset;

  bool propagated = spans.maybe_propagate (this, offset);
  line_map_ordinary *maps = static_cast<line_map_ordinary *>
    (line_map_new_raw (line_table, false, num_ordinary));

  const line_map_ordinary *base = nullptr;
  for (unsigned ix = 0; ix != num_ordinary && !sec.get_overrun (); ix++)
    {
      line_map_ordinary *map = &maps[ix];

      unsigned offset = sec.u ();
      if (!offset)
        {
          map->reason = LC_RENAME;
          map->sysp = sec.u ();
          map->m_range_bits = sec.u ();
          map->m_column_and_range_bits = sec.u () + map->m_range_bits;
          unsigned fnum = sec.u ();
          map->to_file = (fnum < filenames.length () ? filenames[fnum] : "");
          map->to_line = sec.u ();
          base = map;
        }
      else
        {
          *map = *base;
          map->to_line += offset >> map->m_column_and_range_bits;
        }
      unsigned remap = sec.u ();
      map->start_location = remap + ordinary_locs.first;
      if (base == map)
        {
          /* Root the outermost map at our location.  */
          ordinary_locs.second = remap;
          location_t from = read_location (sec);
          map->included_from = from != UNKNOWN_LOCATION ? from : loc;
        }
    }

  ordinary_locs.second = num_ord_locs;
  /* highest_location is the one handed out, not the next one to
     hand out.  */
  line_table->highest_location = ordinary_locs.first + ordinary_locs.second - 1;

  if (line_table->highest_location >= LINE_MAP_MAX_LOCATION_WITH_COLS)
    /* We shouldn't run out of locations, as we checked before
       starting.  */
    sec.set_overrun ();
  dump () && dump ("Ordinary location [%u,+%u)",
                   ordinary_locs.first, ordinary_locs.second);

  if (propagated)
    spans.close ();

  filenames.release ();

  dump.outdent ();
  if (!sec.end (from ()))
    return false;

  return true;
}

bool
module_state::read_macro_maps (unsigned num_macro_locs)
{
  bytes_in sec;

  if (!sec.begin (loc, from (), MOD_SNAME_PFX ".mlm"))
    return false;
  dump () && dump ("Reading macro location maps");
  dump.indent ();

  unsigned num_macros = sec.u ();
  dump () && dump ("Macro maps:%u locs:%u", num_macros, num_macro_locs);

  bool propagated = spans.maybe_propagate (this,
					   line_table->highest_location + 1);

  location_t offset = LINEMAPS_MACRO_LOWEST_LOCATION (line_table);
  macro_locs.second = num_macro_locs;
  macro_locs.first = offset - num_macro_locs;

  dump () && dump ("Macro loc delta %d", offset);
  dump () && dump ("Macro locations [%u,%u)",
		   macro_locs.first, macro_locs.second);

  for (unsigned ix = 0; ix != num_macros && !sec.get_overrun (); ix++)
    {
      unsigned offset = sec.u ();
      unsigned n_tokens = sec.u ();
      cpp_hashnode *node = sec.cpp_node ();
      location_t exp_loc = read_location (sec);

      const line_map_macro *macro
	= linemap_enter_macro (line_table, node, exp_loc, n_tokens);
      if (!macro)
	/* We shouldn't run out of locations, as we checked that we
	   had enough before starting.  */
	break;
      gcc_checking_assert (MAP_START_LOCATION (macro)
			   == offset + macro_locs.first);

      location_t *locs = macro->macro_locations;
      location_t tok_loc = UNKNOWN_LOCATION;
      unsigned count = sec.u ();
      unsigned runs = 0;
      for (unsigned jx = macro->n_tokens * 2; jx-- && !sec.get_overrun ();)
	{
	  while (!count-- && !sec.get_overrun ())
	    {
	      runs++;
	      tok_loc = read_location (sec);
	      count = sec.u ();
	    }
	  locs[jx] = tok_loc;
	}
      if (count)
	sec.set_overrun ();
      dump (dumper::LOCATION)
	&& dump ("Macro:%u %I %u/%u*2 locations [%u,%u)",
		 ix, identifier (node), runs, n_tokens,
		 MAP_START_LOCATION (macro),
		 MAP_START_LOCATION (macro) + n_tokens);
    }

  dump () && dump ("Macro location lwm:%u", macro_locs.first);
  if (propagated)
    spans.close ();

  dump.outdent ();
  if (!sec.end (from ()))
    return false;

  return true;
}

/* Serialize the definition of MACRO.  */

void
module_state::write_define (bytes_out &sec, const cpp_macro *macro)
{
  sec.u (macro->count);

  bytes_out::bits_out bits = sec.stream_bits ();
  bits.b (macro->fun_like);
  bits.b (macro->variadic);
  bits.b (macro->syshdr);
  bits.bflush ();

  write_location (sec, macro->line);
  if (macro->fun_like)
    {
      sec.u (macro->paramc);
      const cpp_hashnode *const *parms = macro->parm.params;
      for (unsigned ix = 0; ix != macro->paramc; ix++)
	sec.cpp_node (parms[ix]);
    }

  unsigned len = 0;
  for (unsigned ix = 0; ix != macro->count; ix++)
    {
      const cpp_token *token = &macro->exp.tokens[ix];
      write_location (sec, token->src_loc);
      sec.u (token->type);
      sec.u (token->flags);
      switch (cpp_token_val_index (token))
	{
	default:
	  gcc_unreachable ();

	case CPP_TOKEN_FLD_ARG_NO:
	  /* An argument reference.  */
	  sec.u (token->val.macro_arg.arg_no);
	  sec.cpp_node (token->val.macro_arg.spelling);
	  break;

	case CPP_TOKEN_FLD_NODE:
	  /* An identifier.  */
	  sec.cpp_node (token->val.node.node);
	  if (token->val.node.spelling == token->val.node.node)
	    /* The spelling will usually be the same, so optimize
	       that.  */
	    sec.str (NULL, 0);
	  else
	    sec.cpp_node (token->val.node.spelling);
	  break;

	case CPP_TOKEN_FLD_NONE:
	  break;

	case CPP_TOKEN_FLD_STR:
	  /* A string, number or comment.  Not always NUL terminated,
	     so we stream out in a single concatenation with embedded
	     NULs as that's a safe default.  */
	  len += token->val.str.len + 1;
	  sec.u (token->val.str.len);
	  break;

	case CPP_TOKEN_FLD_SOURCE:
	case CPP_TOKEN_FLD_TOKEN_NO:
	case CPP_TOKEN_FLD_PRAGMA:
	  /* These do not occur inside a macro itself.  */
	  gcc_unreachable ();
	}
    }

  if (len)
    {
      char *ptr = reinterpret_cast<char *> (sec.buf (len));
      len = 0;
      for (unsigned ix = 0; ix != macro->count; ix++)
	{
	  const cpp_token *token = &macro->exp.tokens[ix];
	  if (cpp_token_val_index (token) == CPP_TOKEN_FLD_STR)
	    {
	      memcpy (ptr + len, token->val.str.text,
		      token->val.str.len);
	      len += token->val.str.len;
	      ptr[len++] = 0;
	    }
	}
    }
}

/* Read a macro definition.  */

cpp_macro *
module_state::read_define (bytes_in &sec, cpp_reader *reader) const
{
  unsigned count = sec.u ();
  /* We rely on knowing cpp_reader's hash table is ident_hash, and
     its subobject allocator is stringpool_ggc_alloc and that is just
     a wrapper for ggc_alloc_atomic.  */
  cpp_macro *macro
    = (cpp_macro *)ggc_alloc_atomic (sizeof (cpp_macro)
				     + sizeof (cpp_token) * (count - !!count));
  memset (macro, 0, sizeof (cpp_macro) + sizeof (cpp_token) * (count - !!count));

  macro->count = count;
  macro->kind = cmk_macro;
  macro->imported_p = true;

  bytes_in::bits_in bits = sec.stream_bits ();
  macro->fun_like = bits.b ();
  macro->variadic = bits.b ();
  macro->syshdr = bits.b ();
  bits.bflush ();

  macro->line = read_location (sec);

  if (macro->fun_like)
    {
      unsigned paramc = sec.u ();
      cpp_hashnode **params
	= (cpp_hashnode **)ggc_alloc_atomic (sizeof (cpp_hashnode *) * paramc);
      macro->paramc = paramc;
      macro->parm.params = params;
      for (unsigned ix = 0; ix != paramc; ix++)
	params[ix] = sec.cpp_node ();
    }

  unsigned len = 0;
  for (unsigned ix = 0; ix != count && !sec.get_overrun (); ix++)
    {
      cpp_token *token = &macro->exp.tokens[ix];
      token->src_loc = read_location (sec);
      token->type = cpp_ttype (sec.u ());
      token->flags = sec.u ();
      switch (cpp_token_val_index (token))
	{
	default:
	  sec.set_overrun ();
	  break;

	case CPP_TOKEN_FLD_ARG_NO:
	  /* An argument reference.  */
	  {
	    unsigned arg_no = sec.u ();
	    if (arg_no - 1 >= macro->paramc)
	      sec.set_overrun ();
	    token->val.macro_arg.arg_no = arg_no;
	    token->val.macro_arg.spelling = sec.cpp_node ();
	  }
	  break;

	case CPP_TOKEN_FLD_NODE:
	  /* An identifier.  */
	  token->val.node.node = sec.cpp_node ();
	  token->val.node.spelling = sec.cpp_node ();
	  if (!token->val.node.spelling)
	    token->val.node.spelling = token->val.node.node;
	  break;

	case CPP_TOKEN_FLD_NONE:
	  break;

	case CPP_TOKEN_FLD_STR:
	  /* A string, number or comment.  */
	  token->val.str.len = sec.u ();
	  len += token->val.str.len + 1;
	  break;
	}
    }

  if (len)
    if (const char *ptr = reinterpret_cast<const char *> (sec.buf (len)))
      {
	/* There should be a final NUL.  */
	if (ptr[len-1])
	  sec.set_overrun ();
	/* cpp_alloc_token_string will add a final NUL.  */
	const unsigned char *buf
	  = cpp_alloc_token_string (reader, (const unsigned char *)ptr, len - 1);
	len = 0;
	for (unsigned ix = 0; ix != count && !sec.get_overrun (); ix++)
	  {
	    cpp_token *token = &macro->exp.tokens[ix];
	    if (cpp_token_val_index (token) == CPP_TOKEN_FLD_STR)
	      {
		token->val.str.text = buf + len;
		len += token->val.str.len;
		if (buf[len++])
		  sec.set_overrun ();
	      }
	  }
      }

  if (sec.get_overrun ())
    return NULL;
  return macro;
}

/* Exported macro data.  */
struct GTY(()) macro_export {
  cpp_macro *def;
  location_t undef_loc;

  macro_export ()
    :def (NULL), undef_loc (UNKNOWN_LOCATION)
  {
  }
};

/* Imported macro data.  */
class macro_import {
public:
  struct slot {
#if defined (WORDS_BIGENDIAN) && SIZEOF_VOID_P == 8
    int offset;
#endif
    /* We need to ensure we don't use the LSB for representation, as
       that's the union discriminator below.  */
    unsigned bits;

#if !(defined (WORDS_BIGENDIAN) && SIZEOF_VOID_P == 8)
    int offset;
#endif

  public:
    enum Layout {
      L_DEF = 1,
      L_UNDEF = 2,
      L_BOTH = 3,
      L_MODULE_SHIFT = 2
    };

  public:
    /* Not a regular ctor, because we put it in a union, and that's
       not allowed in C++ 98.  */
    static slot ctor (unsigned module, unsigned defness)
    {
      gcc_checking_assert (defness);
      slot s;
      s.bits = defness | (module << L_MODULE_SHIFT);
      s.offset = -1;
      return s;
    }

  public:
    unsigned get_defness () const
    {
      return bits & L_BOTH;
    }
    unsigned get_module () const
    {
      return bits >> L_MODULE_SHIFT;
    }
    void become_undef ()
    {
      bits &= ~unsigned (L_DEF);
      bits |= unsigned (L_UNDEF);
    }
  };

private:
  typedef vec<slot, va_heap, vl_embed> ary_t;
  union either {
    /* Discriminated by bits 0|1 != 0.  The expected case is that
       there will be exactly one slot per macro, hence the effort of
       packing that.  */
    ary_t *ary;
    slot single;
  } u;

public:
  macro_import ()
  {
    u.ary = NULL;
  }

private:
  bool single_p () const
  {
    return u.single.bits & slot::L_BOTH;
  }
  bool occupied_p () const
  {
    return u.ary != NULL;
  }

public:
  unsigned length () const
  {
    gcc_checking_assert (occupied_p ());
    return single_p () ? 1 : u.ary->length ();
  }
  slot &operator[] (unsigned ix)
  {
    gcc_checking_assert (occupied_p ());
    if (single_p ())
      {
	gcc_checking_assert (!ix);
	return u.single;
      }
    else
      return (*u.ary)[ix];
  }

public:
  slot &exported ();
  slot &append (unsigned module, unsigned defness);
};

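/* As a worked illustration of the packing above (illustration only,
   not part of the implementation): a slot recording that module 3
   provides both a #define and an #undef would be built as

     macro_import::slot s
       = macro_import::slot::ctor (3, macro_import::slot::L_BOTH);

   giving s.bits == (3 << L_MODULE_SHIFT) | L_BOTH == 0xf, whence
   s.get_module () == 3 and s.get_defness () == L_BOTH.  Because
   DEFNESS is asserted non-zero, an occupied single slot always has
   at least one of the two LSBs set, which is what single_p's union
   discrimination relies on: a heap-allocated ary_t pointer is
   suitably aligned, so its two LSBs are zero.  */
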
/* Append a slot for a new import of MODULE with DEFNESS to the
   list.  If we're an empty set, initialize us with that single
   slot.  */

macro_import::slot &
macro_import::append (unsigned module, unsigned defness)
{
  if (!occupied_p ())
    {
      u.single = slot::ctor (module, defness);
      return u.single;
    }
  else
    {
      bool single = single_p ();
      ary_t *m = single ? NULL : u.ary;
      vec_safe_reserve (m, 1 + single);
      if (single)
	m->quick_push (u.single);
      u.ary = m;
      return *u.ary->quick_push (slot::ctor (module, defness));
    }
}

/* We're going to export something.  Make sure the first import slot
   is us.  */

macro_import::slot &
macro_import::exported ()
{
  if (occupied_p () && !(*this)[0].get_module ())
    {
      slot &res = (*this)[0];
      res.bits |= slot::L_DEF;
      return res;
    }

  slot *a = &append (0, slot::L_DEF);
  if (!single_p ())
    {
      slot &f = (*this)[0];
      std::swap (f, *a);
      a = &f;
    }
  return *a;
}

/* The imported (& exported) macros.  cpp_hashnode's deferred field
   indexes this array (offset by 1, so zero means 'not present').  */

static vec<macro_import, va_heap, vl_embed> *macro_imports;

/* The exported macros.  A macro_import slot's zeroth element's
   offset indexes this array.  If the zeroth slot is not for module
   zero, there is no export.  */

static GTY(()) vec<macro_export, va_gc> *macro_exports;

/* The reachable set of header imports from this TU.  */

static GTY(()) bitmap headers;

/* Get the (possibly empty) macro imports for NODE.  */

static macro_import &
get_macro_imports (cpp_hashnode *node)
{
  if (node->deferred)
    return (*macro_imports)[node->deferred - 1];

  vec_safe_reserve (macro_imports, 1);
  node->deferred = macro_imports->length () + 1;
  return *vec_safe_push (macro_imports, macro_import ());
}

/* Get the macro export referenced by SLOT, allocating it on first
   use.  */

static macro_export &
get_macro_export (macro_import::slot &slot)
{
  if (slot.offset >= 0)
    return (*macro_exports)[slot.offset];

  vec_safe_reserve (macro_exports, 1);
  slot.offset = macro_exports->length ();
  return *macro_exports->quick_push (macro_export ());
}

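/* Schematically, the lookup chain from an identifier to its
   recorded export is therefore (illustration only):

     macro_import &imp = (*macro_imports)[node->deferred - 1];
     macro_export &exp = (*macro_exports)[imp[0].offset];

   where node->deferred == 0 means NODE has no import information at
   all, and a negative imp[0].offset means no export has been
   allocated yet.  */
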
/* If NODE is an exportable macro, add it to the export set.  */

static int
maybe_add_macro (cpp_reader *, cpp_hashnode *node, void *data_)
{
  bool exporting = false;

  if (cpp_user_macro_p (node))
    if (cpp_macro *macro = node->value.macro)
      /* Ignore imported, builtin, command-line and forced-header
	 macros.  */
      if (!macro->imported_p
	  && !macro->lazy && macro->line >= spans.main_start ())
	{
	  gcc_checking_assert (macro->kind == cmk_macro);
	  /* I don't want to deal with this corner case, that I suspect is
	     a devil's advocate reading of the standard.  */
	  gcc_checking_assert (!macro->extra_tokens);

	  macro_import::slot &slot = get_macro_imports (node).exported ();
	  macro_export &exp = get_macro_export (slot);
	  exp.def = macro;
	  exporting = true;
	}

  if (!exporting && node->deferred)
    {
      macro_import &imports = (*macro_imports)[node->deferred - 1];
      macro_import::slot &slot = imports[0];
      if (!slot.get_module ())
	{
	  gcc_checking_assert (slot.get_defness ());
	  exporting = true;
	}
    }

  if (exporting)
    static_cast<vec<cpp_hashnode *> *> (data_)->safe_push (node);

  return 1; /* Don't stop.  */
}

/* Order cpp_hashnodes A_ and B_ by their exported macro locations,
   most recent location first.  */

static int
macro_loc_cmp (const void *a_, const void *b_)
{
  const cpp_hashnode *node_a = *(const cpp_hashnode *const *)a_;
  macro_import &import_a = (*macro_imports)[node_a->deferred - 1];
  const macro_export &export_a = (*macro_exports)[import_a[0].offset];
  location_t loc_a = export_a.def ? export_a.def->line : export_a.undef_loc;

  const cpp_hashnode *node_b = *(const cpp_hashnode *const *)b_;
  macro_import &import_b = (*macro_imports)[node_b->deferred - 1];
  const macro_export &export_b = (*macro_exports)[import_b[0].offset];
  location_t loc_b = export_b.def ? export_b.def->line : export_b.undef_loc;

  if (loc_a < loc_b)
    return +1;
  else if (loc_a > loc_b)
    return -1;
  else
    return 0;
}

/* Gather the macro definitions and undefinitions that we will need
   to write out.  */

vec<cpp_hashnode *> *
module_state::prepare_macros (cpp_reader *reader)
{
  vec<cpp_hashnode *> *macros;
  vec_alloc (macros, 100);

  cpp_forall_identifiers (reader, maybe_add_macro, macros);

  dump (dumper::MACRO) && dump ("No more than %u macros", macros->length ());

  macros->qsort (macro_loc_cmp);

  /* Note the locations.  */
  for (unsigned ix = macros->length (); ix--;)
    {
      cpp_hashnode *node = (*macros)[ix];
      macro_import::slot &slot = (*macro_imports)[node->deferred - 1][0];
      macro_export &mac = (*macro_exports)[slot.offset];

      if (IDENTIFIER_KEYWORD_P (identifier (node)))
	continue;

      if (mac.undef_loc != UNKNOWN_LOCATION)
	note_location (mac.undef_loc);
      if (mac.def)
	{
	  note_location (mac.def->line);
	  for (unsigned jx = 0; jx != mac.def->count; jx++)
	    note_location (mac.def->exp.tokens[jx].src_loc);
	}
    }

  return macros;
}

/* Write out the exported defines.  This is two sections, one
   containing the definitions, the other a table of node names.  */

unsigned
module_state::write_macros (elf_out *to, vec<cpp_hashnode *> *macros,
			    unsigned *crc_p)
{
  dump () && dump ("Writing macros");
  dump.indent ();

  /* Write the defs.  */
  bytes_out sec (to);
  sec.begin ();

  unsigned count = 0;
  for (unsigned ix = macros->length (); ix--;)
    {
      cpp_hashnode *node = (*macros)[ix];
      macro_import::slot &slot = (*macro_imports)[node->deferred - 1][0];
      gcc_assert (!slot.get_module () && slot.get_defness ());

      macro_export &mac = (*macro_exports)[slot.offset];
      gcc_assert (!!(slot.get_defness () & macro_import::slot::L_UNDEF)
		  == (mac.undef_loc != UNKNOWN_LOCATION)
		  && !!(slot.get_defness () & macro_import::slot::L_DEF)
		  == (mac.def != NULL));

      if (IDENTIFIER_KEYWORD_P (identifier (node)))
	{
	  warning_at (mac.def->line, 0,
		      "not exporting %<#define %E%> as it is a keyword",
		      identifier (node));
	  slot.offset = 0;
	  continue;
	}

      count++;
      slot.offset = sec.pos;
      dump (dumper::MACRO)
	&& dump ("Writing macro %s%s%s %I at %u",
		 slot.get_defness () & macro_import::slot::L_UNDEF
		 ? "#undef" : "",
		 slot.get_defness () == macro_import::slot::L_BOTH
		 ? " & " : "",
		 slot.get_defness () & macro_import::slot::L_DEF
		 ? "#define" : "",
		 identifier (node), slot.offset);
      if (mac.undef_loc != UNKNOWN_LOCATION)
	write_location (sec, mac.undef_loc);
      if (mac.def)
	write_define (sec, mac.def);
    }
  if (count)
    /* We may have ended on a tokenless macro with a very short
       location, which would cause problems reading its bit flags.  */
    sec.u (0);
  sec.end (to, to->name (MOD_SNAME_PFX ".def"), crc_p);

  if (count)
    {
      /* Write the table.  */
      bytes_out sec (to);
      sec.begin ();
      sec.u (count);

      for (unsigned ix = macros->length (); ix--;)
	{
	  const cpp_hashnode *node = (*macros)[ix];
	  macro_import::slot &slot = (*macro_imports)[node->deferred - 1][0];

	  if (slot.offset)
	    {
	      sec.cpp_node (node);
	      sec.u (slot.get_defness ());
	      sec.u (slot.offset);
	    }
	}
      sec.end (to, to->name (MOD_SNAME_PFX ".mac"), crc_p);
    }

  dump.outdent ();
  return count;
}

bool
module_state::read_macros ()
{
  /* Get the def section.  */
  if (!slurp->macro_defs.begin (loc, from (), MOD_SNAME_PFX ".def"))
    return false;

  /* Get the tbl section, if there are defs.  */
  if (slurp->macro_defs.more_p ()
      && !slurp->macro_tbl.begin (loc, from (), MOD_SNAME_PFX ".mac"))
    return false;

  return true;
}

/* Install the macro name table.  */

void
module_state::install_macros ()
{
  bytes_in &sec = slurp->macro_tbl;
  if (!sec.size)
    return;

  dump () && dump ("Reading macro table %M", this);
  dump.indent ();

  unsigned count = sec.u ();
  dump () && dump ("%u macros", count);
  while (count--)
    {
      cpp_hashnode *node = sec.cpp_node ();
      macro_import &imp = get_macro_imports (node);
      unsigned flags = sec.u () & macro_import::slot::L_BOTH;
      if (!flags)
	sec.set_overrun ();

      if (sec.get_overrun ())
	break;

      macro_import::slot &slot = imp.append (mod, flags);
      slot.offset = sec.u ();

      dump (dumper::MACRO)
	&& dump ("Read %s macro %s%s%s %I at %u",
		 imp.length () > 1 ? "add" : "new",
		 flags & macro_import::slot::L_UNDEF ? "#undef" : "",
		 flags == macro_import::slot::L_BOTH ? " & " : "",
		 flags & macro_import::slot::L_DEF ? "#define" : "",
		 identifier (node), slot.offset);

      /* We'll leak an imported definition's TOKEN_FLD_STR's data
	 here.  But that only happens when we've had to resolve the
	 deferred macro before this import -- why are you doing
	 that?  */
      if (cpp_macro *cur = cpp_set_deferred_macro (node))
	if (!cur->imported_p)
	  {
	    macro_import::slot &slot = imp.exported ();
	    macro_export &exp = get_macro_export (slot);
	    exp.def = cur;
	    dump (dumper::MACRO)
	      && dump ("Saving current #define %I", identifier (node));
	  }
    }

  /* We're now done with the table.  */
  elf_in::release (slurp->from, sec);

  dump.outdent ();
}

/* Import the transitive macros.  */

void
module_state::import_macros ()
{
  bitmap_ior_into (headers, slurp->headers);

  bitmap_iterator bititer;
  unsigned bitnum;
  EXECUTE_IF_SET_IN_BITMAP (slurp->headers, 0, bitnum, bititer)
    (*modules)[bitnum]->install_macros ();
}

/* NODE is being undefined at LOC.  Record it in the export table, if
   necessary.  */

void
module_state::undef_macro (cpp_reader *, location_t loc, cpp_hashnode *node)
{
  if (!node->deferred)
    /* The macro is not imported, so our undef is irrelevant.  */
    return;

  unsigned n = dump.push (NULL);

  macro_import::slot &slot = (*macro_imports)[node->deferred - 1].exported ();
  macro_export &exp = get_macro_export (slot);

  exp.undef_loc = loc;
  slot.become_undef ();
  exp.def = NULL;

  dump (dumper::MACRO) && dump ("Recording macro #undef %I", identifier (node));

  dump.pop (n);
}

/* NODE is a deferred macro node.  Determine the definition and
   return it, or NULL if undefined.  May issue diagnostics.

   This can leak memory, when merging declarations -- the string
   contents (TOKEN_FLD_STR) of each definition are allocated in an
   unreclaimable cpp objstack.  Only one will win.  However, I do not
   expect this to be common -- mostly macros have a single point of
   definition.  Perhaps we could restore the objstack to its position
   after the first imported definition (if that wins)?  The macros
   themselves are GC'd.  */

cpp_macro *
module_state::deferred_macro (cpp_reader *reader, location_t loc,
			      cpp_hashnode *node)
{
  macro_import &imports = (*macro_imports)[node->deferred - 1];

  unsigned n = dump.push (NULL);
  dump (dumper::MACRO) && dump ("Deferred macro %I", identifier (node));

  bitmap visible (BITMAP_GGC_ALLOC ());

  if (!((imports[0].get_defness () & macro_import::slot::L_UNDEF)
	&& !imports[0].get_module ()))
    {
      /* Calculate the set of visible header imports.  */
      bitmap_copy (visible, headers);
      for (unsigned ix = imports.length (); ix--;)
	{
	  const macro_import::slot &slot = imports[ix];
	  unsigned mod = slot.get_module ();
	  if ((slot.get_defness () & macro_import::slot::L_UNDEF)
	      && bitmap_bit_p (visible, mod))
	    {
	      bitmap arg = mod ? (*modules)[mod]->slurp->headers : headers;
	      bitmap_and_compl_into (visible, arg);
	      bitmap_set_bit (visible, mod);
	    }
	}
    }
  bitmap_set_bit (visible, 0);

  /* Now find the macros that are still visible.  */
  bool failed = false;
  cpp_macro *def = NULL;
  vec<macro_export> defs;
  defs.create (imports.length ());
  for (unsigned ix = imports.length (); ix--;)
    {
      const macro_import::slot &slot = imports[ix];
      unsigned mod = slot.get_module ();
      if (bitmap_bit_p (visible, mod))
	{
	  macro_export *pushed = NULL;
	  if (mod)
	    {
	      const module_state *imp = (*modules)[mod];
	      bytes_in &sec = imp->slurp->macro_defs;
	      if (!sec.get_overrun ())
		{
		  dump (dumper::MACRO)
		    && dump ("Reading macro %s%s%s %I module %M at %u",
			     slot.get_defness () & macro_import::slot::L_UNDEF
			     ? "#undef" : "",
			     slot.get_defness () == macro_import::slot::L_BOTH
			     ? " & " : "",
			     slot.get_defness () & macro_import::slot::L_DEF
			     ? "#define" : "",
			     identifier (node), imp, slot.offset);
		  sec.random_access (slot.offset);

		  macro_export exp;
		  if (slot.get_defness () & macro_import::slot::L_UNDEF)
		    exp.undef_loc = imp->read_location (sec);
		  if (slot.get_defness () & macro_import::slot::L_DEF)
		    exp.def = imp->read_define (sec, reader);
		  if (sec.get_overrun ())
		    error_at (loc, "macro definitions of %qE corrupted",
			      imp->name);
		  else
		    pushed = defs.quick_push (exp);
		}
	    }
	  else
	    pushed = defs.quick_push ((*macro_exports)[slot.offset]);
	  if (pushed && pushed->def)
	    {
	      if (!def)
		def = pushed->def;
	      else if (cpp_compare_macros (def, pushed->def))
		failed = true;
	    }
	}
    }

  if (failed)
    {
      /* If LOC is the first loc, this is the end-of-file check,
	 which is a warning.  */
      if (loc == MAP_START_LOCATION (LINEMAPS_ORDINARY_MAP_AT (line_table, 0)))
	warning_at (loc, OPT_Winvalid_imported_macros,
		    "inconsistent imported macro definition %qE",
		    identifier (node));
      else
	error_at (loc, "inconsistent imported macro definition %qE",
		  identifier (node));
      for (unsigned ix = defs.length (); ix--;)
	{
	  macro_export &exp = defs[ix];
	  if (exp.undef_loc)
	    inform (exp.undef_loc, "%<#undef %E%>", identifier (node));
	  if (exp.def)
	    inform (exp.def->line, "%<#define %s%>",
		    cpp_macro_definition (reader, node, exp.def));
	}
      def = NULL;
    }

  defs.release ();

  dump.pop (n);

  return def;
}

/* Stream the static aggregates.  Sadly some headers (ahem:
   iostream) contain static vars, and rely on them to run global
   ctors.  */

unsigned
module_state::write_inits (elf_out *to, depset::hash &table, unsigned *crc_ptr)
{
  if (!static_aggregates && !tls_aggregates)
    return 0;

  dump () && dump ("Writing initializers");
  dump.indent ();

  static_aggregates = nreverse (static_aggregates);
  tls_aggregates = nreverse (tls_aggregates);

  unsigned count = 0;
  trees_out sec (to, this, table, ~0u);
  sec.begin ();

  tree list = static_aggregates;
  for (int passes = 0; passes != 2; passes++)
    {
      for (tree init = list; init; init = TREE_CHAIN (init))
	if (TREE_LANG_FLAG_0 (init))
	  {
	    tree decl = TREE_VALUE (init);

	    dump ("Initializer:%u for %N", count, decl);
	    sec.tree_node (decl);
	    ++count;
	  }

      list = tls_aggregates;
    }

  sec.end (to, to->name (MOD_SNAME_PFX ".ini"), crc_ptr);
  dump.outdent ();

  return count;
}

/* We have to defer some post-load processing until we've completed
   reading, because it can cause more reading.  */

static void
post_load_processing ()
{
  /* We mustn't cause a GC, our caller should have arranged for that
     not to happen.  */
  gcc_checking_assert (function_depth);

  if (!post_load_decls)
    return;

  tree old_cfd = current_function_decl;
  struct function *old_cfun = cfun;
  while (post_load_decls->length ())
    {
      tree decl = post_load_decls->pop ();

      dump () && dump ("Post-load processing of %N", decl);

      gcc_checking_assert (DECL_ABSTRACT_P (decl));
      /* Cloning can cause loading -- specifically operator delete
	 for the deleting dtor.  */
      maybe_clone_body (decl);
    }

  cfun = old_cfun;
  current_function_decl = old_cfd;
}

bool
module_state::read_inits (unsigned count)
{
  trees_in sec (this);
  if (!sec.begin (loc, from (), from ()->find (MOD_SNAME_PFX ".ini")))
    return false;
  dump () && dump ("Reading %u initializers", count);
  dump.indent ();

  lazy_snum = ~0u;
  for (unsigned ix = 0; ix != count; ix++)
    {
      /* Merely referencing the decl causes its initializer to be
	 read and added to the correct list.  */
      tree decl = sec.tree_node ();

      if (sec.get_overrun ())
	break;
      if (decl)
	dump ("Initializer:%u for %N", ix, decl);
    }
  lazy_snum = 0;
  post_load_processing ();
  dump.outdent ();
  if (!sec.end (from ()))
    return false;
  return true;
}

void
module_state::write_counts (elf_out *to, unsigned counts[MSC_HWM],
			    unsigned *crc_ptr)
{
  bytes_out cfg (to);

  cfg.begin ();

  for (unsigned ix = MSC_HWM; ix--;)
    cfg.u (counts[ix]);

  if (dump ())
    {
      dump ("Cluster sections are [%u,%u)",
	    counts[MSC_sec_lwm], counts[MSC_sec_hwm]);
      dump ("Bindings %u", counts[MSC_bindings]);
      dump ("Pendings %u", counts[MSC_pendings]);
      dump ("Entities %u", counts[MSC_entities]);
      dump ("Namespaces %u", counts[MSC_namespaces]);
      dump ("Macros %u", counts[MSC_macros]);
      dump ("Initializers %u", counts[MSC_inits]);
    }

  cfg.end (to, to->name (MOD_SNAME_PFX ".cnt"), crc_ptr);
}

bool
module_state::read_counts (unsigned counts[MSC_HWM])
{
  bytes_in cfg;

  if (!cfg.begin (loc, from (), MOD_SNAME_PFX ".cnt"))
    return false;

  for (unsigned ix = MSC_HWM; ix--;)
    counts[ix] = cfg.u ();

  if (dump ())
    {
      dump ("Declaration sections are [%u,%u)",
	    counts[MSC_sec_lwm], counts[MSC_sec_hwm]);
      dump ("Bindings %u", counts[MSC_bindings]);
      dump ("Pendings %u", counts[MSC_pendings]);
      dump ("Entities %u", counts[MSC_entities]);
      dump ("Namespaces %u", counts[MSC_namespaces]);
      dump ("Macros %u", counts[MSC_macros]);
      dump ("Initializers %u", counts[MSC_inits]);
    }

  return cfg.end (from ());
}

/* Tool configuration: MOD_SNAME_PFX .config

   This is data that confirms current state (or fails).  */

void
module_state::write_config (elf_out *to, module_state_config &config,
			    unsigned inner_crc)
{
  bytes_out cfg (to);

  cfg.begin ();

  /* Write version and inner crc as u32 values, for easier
     debug inspection.  */
  dump () && dump ("Writing version=%V, inner_crc=%x",
		   MODULE_VERSION, inner_crc);
  cfg.u32 (unsigned (MODULE_VERSION));
  cfg.u32 (inner_crc);

  cfg.u (to->name (is_header () ? "" : get_flatname ()));

  /* Configuration.  */
  dump () && dump ("Writing target='%s', host='%s'",
		   TARGET_MACHINE, HOST_MACHINE);
  unsigned target = to->name (TARGET_MACHINE);
  unsigned host = (!strcmp (TARGET_MACHINE, HOST_MACHINE)
		   ? target : to->name (HOST_MACHINE));
  cfg.u (target);
  cfg.u (host);

  cfg.str (config.dialect_str);
  cfg.u (extensions);

  /* Global tree information.  We write the globals crc separately,
     rather than mix it directly into the overall crc, as it is used
     to ensure data match between instances of the compiler, not
     integrity of the file.  */
  dump () && dump ("Writing globals=%u, crc=%x",
		   fixed_trees->length (), global_crc);
  cfg.u (fixed_trees->length ());
  cfg.u32 (global_crc);

  if (is_partition ())
    cfg.u (is_interface ());

  cfg.u (config.num_imports);
  cfg.u (config.num_partitions);
  cfg.u (config.num_entities);

  cfg.u (config.ordinary_locs);
  cfg.u (config.macro_locs);
  cfg.u (config.loc_range_bits);

  cfg.u (config.active_init);

  /* Now generate CRC, we'll have incorporated the inner CRC because
     of its serialization above.  */
  cfg.end (to, to->name (MOD_SNAME_PFX ".cfg"), &crc);
  dump () && dump ("Writing CRC=%x", crc);
}

void
module_state::note_cmi_name ()
{
  if (!cmi_noted_p && filename)
    {
      cmi_noted_p = true;
      inform (loc, "compiled module file is %qs",
	      maybe_add_cmi_prefix (filename));
    }
}
17915
17916bool
17917module_state::read_config (module_state_config &config)
17918{
17919 bytes_in cfg;
17920
  if (!cfg.begin (loc, from (), MOD_SNAME_PFX ".cfg"))
17922 return false;
17923
17924 /* Check version. */
17925 unsigned my_ver = MODULE_VERSION;
17926 unsigned their_ver = cfg.u32 ();
17927 dump () && dump (my_ver == their_ver ? "Version %V"
17928 : "Expecting %V found %V", my_ver, their_ver);
17929 if (their_ver != my_ver)
17930 {
17931 /* The compiler versions differ. Close enough? */
17932 verstr_t my_string, their_string;
17933
      version2string (my_ver, my_string);
      version2string (their_ver, their_string);
17936
17937 /* Reject when either is non-experimental or when experimental
17938 major versions differ. */
17939 bool reject_p = ((!IS_EXPERIMENTAL (my_ver)
17940 || !IS_EXPERIMENTAL (their_ver)
17941 || MODULE_MAJOR (my_ver) != MODULE_MAJOR (their_ver))
17942 /* The 'I know what I'm doing' switch. */
17943 && !flag_module_version_ignore);
17944 bool inform_p = true;
17945 if (reject_p)
17946 {
17947 cfg.set_overrun ();
17948 error_at (loc, "compiled module is %sversion %s",
17949 IS_EXPERIMENTAL (their_ver) ? "experimental " : "",
17950 their_string);
17951 }
17952 else
17953 inform_p = warning_at (loc, 0, "compiled module is %sversion %s",
17954 IS_EXPERIMENTAL (their_ver) ? "experimental " : "",
17955 their_string);
17956
17957 if (inform_p)
17958 {
17959 inform (loc, "compiler is %sversion %s%s%s",
17960 IS_EXPERIMENTAL (my_ver) ? "experimental " : "",
17961 my_string,
17962 reject_p ? "" : flag_module_version_ignore
17963 ? ", be it on your own head!" : ", close enough?",
17964 reject_p ? "" : " \xc2\xaf\\_(\xe3\x83\x84)_/\xc2\xaf");
17965 note_cmi_name ();
17966 }
17967
17968 if (reject_p)
17969 goto done;
17970 }
17971
17972 /* We wrote the inner crc merely to merge it, so simply read it
17973 back and forget it. */
17974 cfg.u32 ();
17975
17976 /* Check module name. */
17977 {
    const char *their_name = from ()->name (cfg.u ());
17979 const char *our_name = "";
17980
17981 if (!is_header ())
17982 our_name = get_flatname ();
17983
17984 /* Header units can be aliased, so name checking is
17985 inappropriate. */
    if (0 != strcmp (their_name, our_name))
17987 {
17988 error_at (loc,
17989 their_name[0] && our_name[0] ? G_("module %qs found")
17990 : their_name[0]
17991 ? G_("header module expected, module %qs found")
17992 : G_("module %qs expected, header module found"),
17993 their_name[0] ? their_name : our_name);
17994 cfg.set_overrun ();
17995 goto done;
17996 }
17997 }
17998
17999 /* Check the CRC after the above sanity checks, so that the user is
18000 clued in. */
18001 {
18002 unsigned e_crc = crc;
18003 crc = cfg.get_crc ();
18004 dump () && dump ("Reading CRC=%x", crc);
18005 if (!is_direct () && crc != e_crc)
18006 {
18007 error_at (loc, "module %qs CRC mismatch", get_flatname ());
18008 cfg.set_overrun ();
18009 goto done;
18010 }
18011 }
18012
18013 /* Check target & host. */
18014 {
    const char *their_target = from ()->name (cfg.u ());
    const char *their_host = from ()->name (cfg.u ());
18017 dump () && dump ("Read target='%s', host='%s'", their_target, their_host);
18018 if (strcmp (their_target, TARGET_MACHINE)
18019 || strcmp (their_host, HOST_MACHINE))
18020 {
	error_at (loc, "target & host is %qs:%qs, expected %qs:%qs",
		  their_target, their_host, TARGET_MACHINE, HOST_MACHINE);
18023 cfg.set_overrun ();
18024 goto done;
18025 }
18026 }
18027
18028 /* Check compilation dialect. This must match. */
18029 {
18030 const char *their_dialect = cfg.str ();
    if (strcmp (their_dialect, config.dialect_str))
18032 {
18033 error_at (loc, "language dialect differs %qs, expected %qs",
18034 their_dialect, config.dialect_str);
18035 cfg.set_overrun ();
18036 goto done;
18037 }
18038 }
18039
18040 /* Check for extensions. If they set any, we must have them set
18041 too. */
18042 {
18043 unsigned ext = cfg.u ();
18044 unsigned allowed = (flag_openmp ? SE_OPENMP : 0);
18045
18046 if (unsigned bad = ext & ~allowed)
18047 {
18048 if (bad & SE_OPENMP)
18049 error_at (loc, "module contains OpenMP, use %<-fopenmp%> to enable");
18050 cfg.set_overrun ();
18051 goto done;
18052 }
18053 extensions = ext;
18054 }
18055
18056 /* Check global trees. */
18057 {
18058 unsigned their_fixed_length = cfg.u ();
18059 unsigned their_fixed_crc = cfg.u32 ();
18060 dump () && dump ("Read globals=%u, crc=%x",
18061 their_fixed_length, their_fixed_crc);
18062 if (!flag_preprocess_only
18063 && (their_fixed_length != fixed_trees->length ()
18064 || their_fixed_crc != global_crc))
18065 {
18066 error_at (loc, "fixed tree mismatch");
18067 cfg.set_overrun ();
18068 goto done;
18069 }
18070 }
18071
18072 /* All non-partitions are interfaces. */
18073 interface_p = !is_partition () || cfg.u ();
18074
18075 config.num_imports = cfg.u ();
18076 config.num_partitions = cfg.u ();
18077 config.num_entities = cfg.u ();
18078
18079 config.ordinary_locs = cfg.u ();
18080 config.macro_locs = cfg.u ();
18081 config.loc_range_bits = cfg.u ();
18082
18083 config.active_init = cfg.u ();
18084
18085 done:
  return cfg.end (from ());
18087}
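/* The version gate at the top of read_config rejects a mismatched
   CMI unless both versions are experimental with the same major, or
   the user passed the override switch.  A hedged standalone sketch
   of that predicate follows; the bit layout here (top bit marks
   experimental, next 15 bits the major) is a hypothetical stand-in
   for MODULE_VERSION's real encoding.  */

```cpp
// Hypothetical version encoding: top bit = experimental snapshot,
// bits 16..30 = major version, low 16 bits = minor.
static bool toy_is_experimental (unsigned v) { return v & 0x80000000u; }
static unsigned toy_major (unsigned v) { return (v >> 16) & 0x7fffu; }

// Mirrors read_config's reject_p computation: differing versions are
// rejected unless both are experimental with equal majors, or the
// "I know what I'm doing" flag (-fmodule-version-ignore) is set.
static bool toy_reject_version_p (unsigned mine, unsigned theirs,
				  bool ignore_flag)
{
  if (mine == theirs)
    return false;
  bool close_enough = (toy_is_experimental (mine)
		       && toy_is_experimental (theirs)
		       && toy_major (mine) == toy_major (theirs));
  return !close_enough && !ignore_flag;
}
```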
18088
18089/* Comparator for ordering the Ordered Ordinary Location array. */
18090
18091static int
18092ool_cmp (const void *a_, const void *b_)
18093{
18094 auto *a = *static_cast<const module_state *const *> (a_);
18095 auto *b = *static_cast<const module_state *const *> (b_);
18096 if (a == b)
18097 return 0;
18098 else if (a->ordinary_locs.first < b->ordinary_locs.first)
18099 return -1;
18100 else
18101 return +1;
18102}
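/* ool_cmp sorts an array of module_state pointers, hence the double
   indirection when unpacking the qsort arguments.  A self-contained
   toy (toy_module is a hypothetical stand-in keeping only the field
   the comparator orders by) shows the shape of that usage.  */

```cpp
#include <cstdlib>

// Minimal stand-in for module_state.
struct toy_module
{
  unsigned ordinary_first;  // analogue of ordinary_locs.first
};

// Same shape as ool_cmp: the array holds pointers, so each void*
// argument points at a toy_module pointer.
static int toy_ool_cmp (const void *a_, const void *b_)
{
  auto *a = *static_cast<const toy_module *const *> (a_);
  auto *b = *static_cast<const toy_module *const *> (b_);
  if (a == b)
    return 0;
  return a->ordinary_first < b->ordinary_first ? -1 : +1;
}
```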
18103
18104/* Use ELROND format to record the following sections:
18105 qualified-names : binding value(s)
18106 MOD_SNAME_PFX.README : human readable, strings
18107 MOD_SNAME_PFX.ENV : environment strings, strings
18108 MOD_SNAME_PFX.nms : namespace hierarchy
18109 MOD_SNAME_PFX.bnd : binding table
18110 MOD_SNAME_PFX.spc : specialization table
18111 MOD_SNAME_PFX.imp : import table
18112 MOD_SNAME_PFX.ent : entity table
18113 MOD_SNAME_PFX.prt : partitions table
18114 MOD_SNAME_PFX.olm : ordinary line maps
18115 MOD_SNAME_PFX.mlm : macro line maps
18116 MOD_SNAME_PFX.def : macro definitions
18117 MOD_SNAME_PFX.mac : macro index
18118 MOD_SNAME_PFX.ini : inits
18119 MOD_SNAME_PFX.cnt : counts
18120 MOD_SNAME_PFX.cfg : config data
18121*/
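/* Each section write folds its bytes into a running CRC, and the
   accumulated value lands in the trailing .cfg section, so a reader
   can detect a damaged or mismatched CMI.  The sketch below
   demonstrates the chaining property such a scheme relies on, using
   a plain bitwise CRC-32; it is illustrative only, not GCC's actual
   CRC handling.  */

```cpp
#include <cstdint>
#include <cstddef>
#include <cstring>

// Incremental zlib-style CRC-32: feeding buffers one at a time,
// passing the previous result back in, equals one pass over the
// concatenation.  That lets independent sections chain into one
// whole-file checksum.
static uint32_t toy_crc32 (uint32_t crc, const void *buf, size_t len)
{
  const unsigned char *p = static_cast<const unsigned char *> (buf);
  crc = ~crc;
  for (size_t i = 0; i != len; i++)
    {
      crc ^= p[i];
      for (int k = 0; k != 8; k++)
	crc = (crc >> 1) ^ (0xedb88320u & (0u - (crc & 1)));
    }
  return ~crc;
}
```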
18122
18123void
18124module_state::write_begin (elf_out *to, cpp_reader *reader,
18125 module_state_config &config, unsigned &crc)
18126{
18127 /* Figure out remapped module numbers, which might elide
18128 partitions. */
18129 bitmap partitions = NULL;
18130 if (!is_header () && !is_partition ())
18131 partitions = BITMAP_GGC_ALLOC ();
18132 write_init_maps ();
18133
18134 unsigned mod_hwm = 1;
18135 for (unsigned ix = 1; ix != modules->length (); ix++)
18136 {
18137 module_state *imp = (*modules)[ix];
18138
18139 /* Promote any non-partition direct import from a partition, unless
18140 we're a partition. */
18141 if (!is_partition () && !imp->is_partition ()
18142 && imp->is_partition_direct ())
18143 imp->directness = MD_PURVIEW_DIRECT;
18144
18145 /* Write any import that is not a partition, unless we're a
18146 partition. */
18147 if (!partitions || !imp->is_partition ())
18148 imp->remap = mod_hwm++;
18149 else
18150 {
18151 dump () && dump ("Partition %M %u", imp, ix);
18152 bitmap_set_bit (partitions, ix);
18153 imp->remap = 0;
18154 /* All interface partitions must be exported. */
18155 if (imp->is_interface () && !bitmap_bit_p (exports, imp->mod))
18156 {
18157 error_at (imp->loc, "interface partition is not exported");
18158 bitmap_set_bit (exports, imp->mod);
18159 }
18160
18161 /* All the partition entities should have been loaded when
18162 loading the partition. */
18163 if (CHECKING_P)
18164 for (unsigned jx = 0; jx != imp->entity_num; jx++)
18165 {
18166 binding_slot *slot = &(*entity_ary)[imp->entity_lwm + jx];
18167 gcc_checking_assert (!slot->is_lazy ());
18168 }
18169 }
18170
18171 if (imp->is_direct () && (imp->remap || imp->is_partition ()))
      note_location (imp->imported_from ());
18173 }
18174
  if (partitions && bitmap_empty_p (partitions))
18176 /* No partitions present. */
18177 partitions = nullptr;
18178
18179 /* Find the set of decls we must write out. */
18180 depset::hash table (DECL_NAMESPACE_BINDINGS (global_namespace)->size () * 8);
18181 /* Add the specializations before the writables, so that we can
18182 detect injected friend specializations. */
  table.add_specializations (true);
  table.add_specializations (false);
  if (partial_specializations)
    {
      table.add_partial_entities (partial_specializations);
      partial_specializations = NULL;
    }
18190 table.add_namespace_entities (global_namespace, partitions);
18191 if (class_members)
18192 {
18193 table.add_class_entities (class_members);
18194 class_members = NULL;
18195 }
18196
18197 /* Now join everything up. */
  table.find_dependencies (this);
18199
18200 if (!table.finalize_dependencies ())
18201 {
18202 to->set_error ();
18203 return;
18204 }
18205
18206#if CHECKING_P
18207 /* We're done verifying at-most once reading, reset to verify
18208 at-most once writing. */
  note_defs = note_defs_table_t::create_ggc (1000);
18210#endif
18211
  /* Determine Strongly Connected Components.  */
  vec<depset *> sccs = table.connect ();

  vec_alloc (ool, modules->length ());
  for (unsigned ix = modules->length (); --ix;)
    {
      auto *import = (*modules)[ix];
      if (import->loadedness > ML_NONE
	  && !(partitions && bitmap_bit_p (partitions, import->mod)))
	ool->quick_push (import);
    }
  ool->qsort (ool_cmp);
18224
18225 vec<cpp_hashnode *> *macros = nullptr;
18226 if (is_header ())
18227 macros = prepare_macros (reader);
18228
18229 config.num_imports = mod_hwm;
18230 config.num_partitions = modules->length () - mod_hwm;
  auto map_info = write_prepare_maps (&config, bool (config.num_partitions));
  unsigned counts[MSC_HWM];
  memset (counts, 0, sizeof (counts));
18234
18235 /* depset::cluster is the cluster number,
18236 depset::section is unspecified scratch value.
18237
18238 The following loops make use of the tarjan property that
18239 dependencies will be earlier in the SCCS array. */
18240
18241 /* This first loop determines the number of depsets in each SCC, and
18242 also the number of namespaces we're dealing with. During the
18243 loop, the meaning of a couple of depset fields now change:
18244
18245 depset::cluster -> size_of cluster, if first of cluster & !namespace
18246 depset::section -> section number of cluster (if !namespace). */
18247
18248 unsigned n_spaces = 0;
18249 counts[MSC_sec_lwm] = counts[MSC_sec_hwm] = to->get_section_limit ();
18250 for (unsigned size, ix = 0; ix < sccs.length (); ix += size)
18251 {
18252 depset **base = &sccs[ix];
18253
18254 if (base[0]->get_entity_kind () == depset::EK_NAMESPACE)
18255 {
18256 n_spaces++;
18257 size = 1;
18258 }
18259 else
18260 {
18261 /* Count the members in this cluster. */
18262 for (size = 1; ix + size < sccs.length (); size++)
18263 if (base[size]->cluster != base[0]->cluster)
18264 break;
18265
18266 for (unsigned jx = 0; jx != size; jx++)
18267 {
18268 /* Set the section number. */
18269 base[jx]->cluster = ~(~0u >> 1); /* A bad value. */
18270 base[jx]->section = counts[MSC_sec_hwm];
18271 }
18272
18273 /* Save the size in the first member's cluster slot. */
18274 base[0]->cluster = size;
18275
18276 counts[MSC_sec_hwm]++;
18277 }
18278 }
18279
18280 /* Write the clusters. Namespace decls are put in the spaces array.
18281 The meaning of depset::cluster changes to provide the
18282 unnamed-decl count of the depset's decl (and remains zero for
18283 non-decls and non-unnamed). */
18284 unsigned bytes = 0;
18285 vec<depset *> spaces;
  spaces.create (n_spaces);
18287
18288 for (unsigned size, ix = 0; ix < sccs.length (); ix += size)
18289 {
18290 depset **base = &sccs[ix];
18291
18292 if (base[0]->get_entity_kind () == depset::EK_NAMESPACE)
18293 {
18294 tree decl = base[0]->get_entity ();
18295 if (decl == global_namespace)
18296 base[0]->cluster = 0;
18297 else if (!base[0]->is_import ())
18298 {
18299 base[0]->cluster = counts[MSC_entities]++;
	      spaces.quick_push (base[0]);
18301 counts[MSC_namespaces]++;
18302 if (CHECKING_P)
18303 {
18304 /* Add it to the entity map, such that we can tell it is
18305 part of us. */
18306 bool existed;
		  unsigned *slot = &entity_map->get_or_insert
		    (DECL_UID (decl), &existed);
18309 if (existed)
18310 /* It must have come from a partition. */
18311 gcc_checking_assert
18312 (import_entity_module (*slot)->is_partition ());
18313 *slot = ~base[0]->cluster;
18314 }
18315 dump (dumper::CLUSTER) && dump ("Cluster namespace %N", decl);
18316 }
18317 size = 1;
18318 }
18319 else
18320 {
18321 size = base[0]->cluster;
18322
18323 /* Cluster is now used to number entities. */
18324 base[0]->cluster = ~(~0u >> 1); /* A bad value. */
18325
	  sort_cluster (&table, base, size);
18327
18328 /* Record the section for consistency checking during stream
18329 out -- we don't want to start writing decls in different
18330 sections. */
18331 table.section = base[0]->section;
	  bytes += write_cluster (to, base, size, table, counts, &crc);
18333 table.section = 0;
18334 }
18335 }
18336
18337 /* depset::cluster - entity number (on entities)
18338 depset::section - cluster number */
18339 /* We'd better have written as many sections and found as many
18340 namespaces as we predicted. */
18341 gcc_assert (counts[MSC_sec_hwm] == to->get_section_limit ()
18342 && spaces.length () == counts[MSC_namespaces]);
18343
  /* Write the entities.  None happens if we contain namespaces or
     nothing.  */
  config.num_entities = counts[MSC_entities];
  if (counts[MSC_entities])
    write_entities (to, sccs, counts[MSC_entities], &crc);
18349
  /* Write the namespaces.  */
  if (counts[MSC_namespaces])
    write_namespaces (to, spaces, counts[MSC_namespaces], &crc);

  /* Write the bindings themselves.  */
  counts[MSC_bindings] = write_bindings (to, sccs, &crc);

  /* Write the unnamed.  */
  counts[MSC_pendings] = write_pendings (to, sccs, table, &crc);
18359
  /* Write the import table.  */
  if (config.num_imports > 1)
    write_imports (to, &crc);

  /* Write elided partition table.  */
  if (config.num_partitions)
    write_partitions (to, config.num_partitions, &crc);

  /* Write the line maps.  */
  if (config.ordinary_locs)
    write_ordinary_maps (to, map_info, bool (config.num_partitions), &crc);
  if (config.macro_locs)
    write_macro_maps (to, map_info, &crc);
18373
  if (is_header ())
    {
      counts[MSC_macros] = write_macros (to, macros, &crc);
      counts[MSC_inits] = write_inits (to, table, &crc);
      vec_free (macros);
    }
18380
18381 unsigned clusters = counts[MSC_sec_hwm] - counts[MSC_sec_lwm];
18382 dump () && dump ("Wrote %u clusters, average %u bytes/cluster",
18383 clusters, (bytes + clusters / 2) / (clusters + !clusters));
18384 trees_out::instrument ();
18385
  write_counts (to, counts, &crc);
18387
18388 spaces.release ();
18389 sccs.release ();
18390
  vec_free (macro_loc_remap);
  vec_free (ord_loc_remap);
  vec_free (ool);
18394
18395 // FIXME:QOI: Have a command line switch to control more detailed
18396 // information (which might leak data you do not want to leak).
18397 // Perhaps (some of) the write_readme contents should also be
18398 // so-controlled.
18399 if (false)
18400 write_env (to);
18401}
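/* The first loop over SCCS in write_begin sizes each cluster by
   scanning forward while the cluster id matches, relying on tarjan
   placing members of an SCC adjacently.  A toy version over a plain
   array of cluster ids (hypothetical, detached from depset):  */

```cpp
#include <cstddef>
#include <vector>

// Given an array where members of the same SCC carry the same id and
// sit adjacently, return the size of each cluster in order.  Same
// two-loop shape as write_begin's sizing pass.
static std::vector<unsigned> toy_cluster_sizes (const std::vector<int> &cluster)
{
  std::vector<unsigned> sizes;
  for (size_t size, ix = 0; ix < cluster.size (); ix += size)
    {
      // Count the members in this cluster.
      for (size = 1; ix + size < cluster.size (); size++)
	if (cluster[ix + size] != cluster[ix])
	  break;
      sizes.push_back (size);
    }
  return sizes;
}
```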
18402
18403// Finish module writing after we've emitted all dynamic initializers.
18404
18405void
18406module_state::write_end (elf_out *to, cpp_reader *reader,
18407 module_state_config &config, unsigned &crc)
18408{
  /* And finish up.  */
  write_config (to, config, crc);

  /* Human-readable info.  */
  write_readme (to, reader, config.dialect_str);
18414
18415 dump () && dump ("Wrote %u sections", to->get_section_limit ());
18416}
18417
18418/* Initial read of a CMI. Checks config, loads up imports and line
18419 maps. */
18420
18421bool
18422module_state::read_initial (cpp_reader *reader)
18423{
18424 module_state_config config;
18425 bool ok = true;
18426
18427 if (ok && !from ()->begin (loc))
18428 ok = false;
18429
18430 if (ok && !read_config (config))
18431 ok = false;
18432
  bool have_locs = ok && read_prepare_maps (&config);
18434
18435 /* Ordinary maps before the imports. */
  if (!(have_locs && config.ordinary_locs))
    ordinary_locs.first = line_table->highest_location + 1;
  else if (!read_ordinary_maps (config.ordinary_locs, config.loc_range_bits))
18439 ok = false;
18440
  /* Allocate the REMAP vector.  */
  slurp->alloc_remap (config.num_imports);
18443
18444 if (ok)
18445 {
18446 /* Read the import table. Decrement current to stop this CMI
18447 from being evicted during the import. */
18448 slurp->current--;
      if (config.num_imports > 1 && !read_imports (reader, line_table))
18450 ok = false;
18451 slurp->current++;
18452 }
18453
18454 /* Read the elided partition table, if we're the primary partition. */
  if (ok && config.num_partitions && is_module ()
      && !read_partitions (config.num_partitions))
18457 ok = false;
18458
18459 /* Determine the module's number. */
18460 gcc_checking_assert (mod == MODULE_UNKNOWN);
18461 gcc_checking_assert (this != (*modules)[0]);
18462
18463 {
18464 /* Allocate space in the entities array now -- that array must be
18465 monotonically in step with the modules array. */
    entity_lwm = vec_safe_length (entity_ary);
    entity_num = config.num_entities;
    gcc_checking_assert (modules->length () == 1
			 || modules->last ()->entity_lwm <= entity_lwm);
    vec_safe_reserve (entity_ary, config.num_entities);

    binding_slot slot;
    slot.u.binding = NULL_TREE;
    for (unsigned count = config.num_entities; count--;)
      entity_ary->quick_push (slot);
18476 }
18477
18478 /* We'll run out of other resources before we run out of module
18479 indices. */
18480 mod = modules->length ();
  vec_safe_push (modules, this);
18482
18483 /* We always import and export ourselves. */
18484 bitmap_set_bit (imports, mod);
18485 bitmap_set_bit (exports, mod);
18486
18487 if (ok)
18488 (*slurp->remap)[0] = mod << 1;
18489 dump () && dump ("Assigning %M module number %u", this, mod);
18490
18491 /* We should not have been frozen during the importing done by
18492 read_config. */
18493 gcc_assert (!from ()->is_frozen ());
18494
18495 /* Macro maps after the imports. */
  if (!(ok && have_locs && config.macro_locs))
    macro_locs.first = LINEMAPS_MACRO_LOWEST_LOCATION (line_table);
  else if (!read_macro_maps (config.macro_locs))
18499 ok = false;
18500
18501 /* Note whether there's an active initializer. */
18502 active_init_p = !is_header () && bool (config.active_init);
18503
18504 gcc_assert (slurp->current == ~0u);
18505 return ok;
18506}
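/* As the design notes say, each module reserves a contiguous range
   [entity_lwm, entity_lwm + entity_num) of flat entity indices, in
   import order, so a flat index can be de-flattened to its owning
   module by a binary search over the range starts.  A hypothetical
   standalone sketch of that lookup:  */

```cpp
#include <vector>

// One module's slice of the flat entity array.
struct toy_range
{
  unsigned lwm;  // first flat index owned by this module
  unsigned num;  // number of entities it owns
};

// Return the index of the owning module.  RANGES must be sorted by
// lwm (true by construction of the modules array) and FLAT must fall
// inside some range.
static unsigned toy_owning_module (const std::vector<toy_range> &ranges,
				   unsigned flat)
{
  unsigned lo = 0, hi = (unsigned) ranges.size ();
  while (hi - lo > 1)
    {
      unsigned mid = lo + (hi - lo) / 2;
      if (ranges[mid].lwm <= flat)
	lo = mid;    // owner is at mid or later
      else
	hi = mid;    // owner is before mid
    }
  return lo;
}
```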
18507
18508/* Read a preprocessor state. */
18509
18510bool
18511module_state::read_preprocessor (bool outermost)
18512{
18513 gcc_checking_assert (is_header () && slurp
18514 && slurp->remap_module (0) == mod);
18515
18516 if (loadedness == ML_PREPROCESSOR)
18517 return !(from () && from ()->get_error ());
18518
18519 bool ok = true;
18520
18521 /* Read direct header imports. */
18522 unsigned len = slurp->remap->length ();
18523 for (unsigned ix = 1; ok && ix != len; ix++)
18524 {
18525 unsigned map = (*slurp->remap)[ix];
18526 if (map & 1)
18527 {
18528 module_state *import = (*modules)[map >> 1];
18529 if (import->is_header ())
18530 {
	      ok = import->read_preprocessor (false);
18532 bitmap_ior_into (slurp->headers, import->slurp->headers);
18533 }
18534 }
18535 }
18536
18537 /* Record as a direct header. */
18538 if (ok)
18539 bitmap_set_bit (slurp->headers, mod);
18540
18541 if (ok && !read_macros ())
18542 ok = false;
18543
18544 loadedness = ML_PREPROCESSOR;
  announce ("macros");
18546
18547 if (flag_preprocess_only)
18548 /* We're done with the string table. */
18549 from ()->release ();
18550
18551 return check_read (outermost, ok);
18552}
18553
18554/* Read language state. */
18555
18556bool
18557module_state::read_language (bool outermost)
18558{
18559 gcc_checking_assert (!lazy_snum);
18560
18561 if (loadedness == ML_LANGUAGE)
18562 return !(slurp && from () && from ()->get_error ());
18563
18564 gcc_checking_assert (slurp && slurp->current == ~0u
18565 && slurp->remap_module (0) == mod);
18566
18567 bool ok = true;
18568
18569 /* Read direct imports. */
18570 unsigned len = slurp->remap->length ();
18571 for (unsigned ix = 1; ok && ix != len; ix++)
18572 {
18573 unsigned map = (*slurp->remap)[ix];
18574 if (map & 1)
18575 {
18576 module_state *import = (*modules)[map >> 1];
	  if (!import->read_language (false))
18578 ok = false;
18579 }
18580 }
18581
18582 unsigned counts[MSC_HWM];
18583
18584 if (ok && !read_counts (counts))
18585 ok = false;
18586
18587 function_depth++; /* Prevent unexpected GCs. */
18588
18589 if (ok && counts[MSC_entities] != entity_num)
18590 ok = false;
  if (ok && counts[MSC_entities]
      && !read_entities (counts[MSC_entities],
			 counts[MSC_sec_lwm], counts[MSC_sec_hwm]))
18594 ok = false;
18595
  /* Read the namespace hierarchy.  */
  if (ok && counts[MSC_namespaces]
      && !read_namespaces (counts[MSC_namespaces]))
    ok = false;

  if (ok && !read_bindings (counts[MSC_bindings],
			    counts[MSC_sec_lwm], counts[MSC_sec_hwm]))
    ok = false;

  /* And unnamed.  */
  if (ok && counts[MSC_pendings] && !read_pendings (counts[MSC_pendings]))
18607 ok = false;
18608
18609 if (ok)
18610 {
18611 slurp->remaining = counts[MSC_sec_hwm] - counts[MSC_sec_lwm];
18612 available_clusters += counts[MSC_sec_hwm] - counts[MSC_sec_lwm];
18613 }
18614
18615 if (!flag_module_lazy
18616 || (is_partition ()
18617 && module_interface_p ()
18618 && !module_partition_p ()))
18619 {
18620 /* Read the sections in forward order, so that dependencies are read
18621 first. See note about tarjan_connect. */
18622 ggc_collect ();
18623
18624 lazy_snum = ~0u;
18625
18626 unsigned hwm = counts[MSC_sec_hwm];
18627 for (unsigned ix = counts[MSC_sec_lwm]; ok && ix != hwm; ix++)
	if (!load_section (ix, NULL))
18629 {
18630 ok = false;
18631 break;
18632 }
18633 lazy_snum = 0;
18634 post_load_processing ();
18635
18636 ggc_collect ();
18637
18638 if (ok && CHECKING_P)
18639 for (unsigned ix = 0; ix != entity_num; ix++)
18640 gcc_assert (!(*entity_ary)[ix + entity_lwm].is_lazy ());
18641 }
18642
18643 // If the import is a header-unit, we need to register initializers
18644 // of any static objects it contains (looking at you _Ioinit).
18645 // Notice, the ordering of these initializers will be that of a
18646 // dynamic initializer at this point in the current TU. (Other
18647 // instances of these objects in other TUs will be initialized as
18648 // part of that TU's global initializers.)
  if (ok && counts[MSC_inits] && !read_inits (counts[MSC_inits]))
18650 ok = false;
18651
18652 function_depth--;
18653
18654 announce (flag_module_lazy ? "lazy" : "imported");
18655 loadedness = ML_LANGUAGE;
18656
18657 gcc_assert (slurp->current == ~0u);
18658
18659 /* We're done with the string table. */
18660 from ()->release ();
18661
18662 return check_read (outermost, ok);
18663}
18664
18665bool
18666module_state::maybe_defrost ()
18667{
18668 bool ok = true;
18669 if (from ()->is_frozen ())
18670 {
18671 if (lazy_open >= lazy_limit)
18672 freeze_an_elf ();
      dump () && dump ("Defrosting '%s'", filename);
      ok = from ()->defrost (maybe_add_cmi_prefix (filename));
18675 lazy_open++;
18676 }
18677
18678 return ok;
18679}
18680
18681/* Load section SNUM, dealing with laziness. It doesn't matter if we
18682 have multiple concurrent loads, because we do not use TREE_VISITED
18683 when reading back in. */
18684
18685bool
18686module_state::load_section (unsigned snum, binding_slot *mslot)
18687{
18688 if (from ()->get_error ())
18689 return false;
18690
18691 if (snum >= slurp->current)
18692 from ()->set_error (elf::E_BAD_LAZY);
18693 else if (maybe_defrost ())
18694 {
18695 unsigned old_current = slurp->current;
18696 slurp->current = snum;
18697 slurp->lru = 0; /* Do not swap out. */
18698 slurp->remaining--;
18699 read_cluster (snum);
18700 slurp->lru = ++lazy_lru;
18701 slurp->current = old_current;
18702 }
18703
18704 if (mslot && mslot->is_lazy ())
18705 {
18706 /* Oops, the section didn't set this slot. */
18707 from ()->set_error (elf::E_BAD_DATA);
18708 *mslot = NULL_TREE;
18709 }
18710
18711 bool ok = !from ()->get_error ();
18712 if (!ok)
18713 {
      error_at (loc, "failed to read compiled module cluster %u: %s",
		snum, from ()->get_error (filename));
18716 note_cmi_name ();
18717 }
18718
18719 maybe_completed_reading ();
18720
18721 return ok;
18722}
18723
18724void
18725module_state::maybe_completed_reading ()
18726{
18727 if (loadedness == ML_LANGUAGE && slurp->current == ~0u && !slurp->remaining)
18728 {
18729 lazy_open--;
18730 /* We no longer need the macros, all tokenizing has been done. */
18731 slurp->release_macros ();
18732
18733 from ()->end ();
18734 slurp->close ();
18735 slurped ();
18736 }
18737}
18738
18739/* After a reading operation, make sure things are still ok. If not,
18740 emit an error and clean up. */
18741
18742bool
18743module_state::check_read (bool outermost, bool ok)
18744{
18745 gcc_checking_assert (!outermost || slurp->current == ~0u);
18746
18747 if (!ok)
18748 from ()->set_error ();
18749
18750 if (int e = from ()->get_error ())
18751 {
      error_at (loc, "failed to read compiled module: %s",
		from ()->get_error (filename));
18754 note_cmi_name ();
18755
18756 if (e == EMFILE
18757 || e == ENFILE
18758#if MAPPED_READING
18759 || e == ENOMEM
18760#endif
18761 || false)
18762 inform (loc, "consider using %<-fno-module-lazy%>,"
18763 " increasing %<-param-lazy-modules=%u%> value,"
18764 " or increasing the per-process file descriptor limit",
18765 param_lazy_modules);
18766 else if (e == ENOENT)
18767 inform (loc, "imports must be built before being imported");
18768
18769 if (outermost)
18770 fatal_error (loc, "returning to the gate for a mechanical issue");
18771
18772 ok = false;
18773 }
18774
18775 maybe_completed_reading ();
18776
18777 return ok;
18778}
18779
/* Return the name of module IX, including dots.  Header-unit names
   are only returned if HEADER_OK.  Returns NULL otherwise.  */
18782
18783char const *
18784module_name (unsigned ix, bool header_ok)
18785{
18786 if (modules)
18787 {
18788 module_state *imp = (*modules)[ix];
18789
18790 if (ix && !imp->name)
18791 imp = imp->parent;
18792
18793 if (header_ok || !imp->is_header ())
18794 return imp->get_flatname ();
18795 }
18796
18797 return NULL;
18798}
18799
18800/* Return the bitmap describing what modules are imported. Remember,
18801 we always import ourselves. */
18802
18803bitmap
18804get_import_bitmap ()
18805{
18806 return (*modules)[0]->imports;
18807}
18808
18809/* Return the visible imports and path of instantiation for an
18810 instantiation at TINST. If TINST is nullptr, we're not in an
18811 instantiation, and thus will return the visible imports of the
18812 current TU (and NULL *PATH_MAP_P). We cache the information on
18813 the tinst level itself. */
18814
18815static bitmap
18816path_of_instantiation (tinst_level *tinst, bitmap *path_map_p)
18817{
18818 gcc_checking_assert (modules_p ());
18819
18820 if (!tinst)
18821 {
18822 /* Not inside an instantiation, just the regular case. */
18823 *path_map_p = nullptr;
18824 return get_import_bitmap ();
18825 }
18826
18827 if (!tinst->path)
18828 {
18829 /* Calculate. */
      bitmap visible = path_of_instantiation (tinst->next, path_map_p);
18831 bitmap path_map = *path_map_p;
18832
18833 if (!path_map)
18834 {
18835 path_map = BITMAP_GGC_ALLOC ();
18836 bitmap_set_bit (path_map, 0);
18837 }
18838
18839 tree decl = tinst->tldcl;
18840 if (TREE_CODE (decl) == TREE_LIST)
18841 decl = TREE_PURPOSE (decl);
18842 if (TYPE_P (decl))
18843 decl = TYPE_NAME (decl);
18844
18845 if (unsigned mod = get_originating_module (decl))
18846 if (!bitmap_bit_p (path_map, mod))
18847 {
18848 /* This is brand new information! */
18849 bitmap new_path = BITMAP_GGC_ALLOC ();
18850 bitmap_copy (new_path, path_map);
18851 bitmap_set_bit (new_path, mod);
18852 path_map = new_path;
18853
18854 bitmap imports = (*modules)[mod]->imports;
18855 if (bitmap_intersect_compl_p (imports, visible))
18856 {
18857 /* IMPORTS contains additional modules to VISIBLE. */
18858 bitmap new_visible = BITMAP_GGC_ALLOC ();
18859
18860 bitmap_ior (new_visible, visible, imports);
18861 visible = new_visible;
18862 }
18863 }
18864
18865 tinst->path = path_map;
18866 tinst->visible = visible;
18867 }
18868
18869 *path_map_p = tinst->path;
18870 return tinst->visible;
18871}
18872
18873/* Return the bitmap describing what modules are visible along the
18874 path of instantiation. If we're not an instantiation, this will be
18875 the visible imports of the TU. *PATH_MAP_P is filled in with the
18876 modules owning the instantiation path -- we see the module-linkage
18877 entities of those modules. */
18878
18879bitmap
18880visible_instantiation_path (bitmap *path_map_p)
18881{
18882 if (!modules_p ())
18883 return NULL;
18884
  return path_of_instantiation (current_instantiation (), path_map_p);
18886}
18887
/* We've just directly imported IMPORT.  Update our import/export
   bitmaps.  IS_EXPORT is true if we're re-exporting IMPORT.  */
18890
18891void
18892module_state::set_import (module_state const *import, bool is_export)
18893{
18894 gcc_checking_assert (this != import);
18895
18896 /* We see IMPORT's exports (which includes IMPORT). If IMPORT is
18897 the primary interface or a partition we'll see its imports. */
18898 bitmap_ior_into (imports, import->is_module () || import->is_partition ()
18899 ? import->imports : import->exports);
18900
18901 if (is_export)
    /* We'll also export IMPORT's exports.  */
18903 bitmap_ior_into (exports, import->exports);
18904}
18905
/* Return the declaring entity of DECL.  That is the decl determining
   how to decorate DECL with module information.  Returns NULL_TREE if
   it's the global module.  */

tree
get_originating_module_decl (tree decl)
{
  /* An enumeration constant.  */
  if (TREE_CODE (decl) == CONST_DECL
      && DECL_CONTEXT (decl)
      && (TREE_CODE (DECL_CONTEXT (decl)) == ENUMERAL_TYPE))
    decl = TYPE_NAME (DECL_CONTEXT (decl));
  else if (TREE_CODE (decl) == FIELD_DECL
           || TREE_CODE (decl) == USING_DECL
           || CONST_DECL_USING_P (decl))
    {
      decl = DECL_CONTEXT (decl);
      if (TREE_CODE (decl) != FUNCTION_DECL)
        decl = TYPE_NAME (decl);
    }

  gcc_checking_assert (TREE_CODE (decl) == TEMPLATE_DECL
                       || TREE_CODE (decl) == FUNCTION_DECL
                       || TREE_CODE (decl) == TYPE_DECL
                       || TREE_CODE (decl) == VAR_DECL
                       || TREE_CODE (decl) == CONCEPT_DECL
                       || TREE_CODE (decl) == NAMESPACE_DECL);

  for (;;)
    {
      /* Uninstantiated template friends are owned by the befriending
         class -- not their context.  */
      if (TREE_CODE (decl) == TEMPLATE_DECL
          && DECL_UNINSTANTIATED_TEMPLATE_FRIEND_P (decl))
        decl = TYPE_NAME (DECL_CHAIN (decl));

      int use;
      if (tree ti = node_template_info (decl, use))
        {
          decl = TI_TEMPLATE (ti);
          if (TREE_CODE (decl) != TEMPLATE_DECL)
            {
              /* A friend template specialization.  */
              gcc_checking_assert (OVL_P (decl));
              return global_namespace;
            }
        }
      else
        {
          tree ctx = CP_DECL_CONTEXT (decl);
          if (TREE_CODE (ctx) == NAMESPACE_DECL)
            break;

          if (TYPE_P (ctx))
            {
              ctx = TYPE_NAME (ctx);
              if (!ctx)
                {
                  /* Some kind of internal type.  */
                  gcc_checking_assert (DECL_ARTIFICIAL (decl));
                  return global_namespace;
                }
            }
          decl = ctx;
        }
    }

  return decl;
}

/* Return the module number DECL originates from: zero for the
   current TU or an entity of the global module, otherwise the index
   of the imported module.  If FOR_MANGLE, return -1 for entities not
   attached to a named module.  */

int
get_originating_module (tree decl, bool for_mangle)
{
  tree owner = get_originating_module_decl (decl);
  tree not_tmpl = STRIP_TEMPLATE (owner);

  if (!DECL_LANG_SPECIFIC (not_tmpl))
    return for_mangle ? -1 : 0;

  if (for_mangle && !DECL_MODULE_ATTACH_P (not_tmpl))
    return -1;

  int mod = !DECL_MODULE_IMPORT_P (not_tmpl) ? 0 : get_importing_module (owner);
  gcc_checking_assert (!for_mangle || !(*modules)[mod]->is_header ());
  return mod;
}

/* Return the index of the module DECL was imported from, or -1 if
   DECL was not imported (only permitted when FLEXIBLE is true).  */

unsigned
get_importing_module (tree decl, bool flexible)
{
  unsigned index = import_entity_index (decl, flexible);
  if (index == ~(~0u >> 1))
    return -1;
  module_state *module = import_entity_module (index);

  return module->mod;
}

/* Is it permissible to redeclare DECL?  */

bool
module_may_redeclare (tree decl)
{
  for (;;)
    {
      tree ctx = CP_DECL_CONTEXT (decl);
      if (TREE_CODE (ctx) == NAMESPACE_DECL)
        // Found the namespace-scope decl.
        break;
      if (!CLASS_TYPE_P (ctx))
        // We've met a non-class scope.  Such a thing is not
        // reopenable, so we must be ok.
        return true;
      decl = TYPE_NAME (ctx);
    }

  tree not_tmpl = STRIP_TEMPLATE (decl);

  int use_tpl = 0;
  if (node_template_info (not_tmpl, use_tpl) && use_tpl)
    // Specializations of any kind can be redeclared anywhere.
    // FIXME: Should we be checking this in more places on the scope chain?
    return true;

  if (!DECL_LANG_SPECIFIC (not_tmpl) || !DECL_MODULE_ATTACH_P (not_tmpl))
    // Decl is attached to global module.  Current scope needs to be too.
    return !module_attach_p ();

  module_state *me = (*modules)[0];
  module_state *them = me;

  if (DECL_LANG_SPECIFIC (not_tmpl) && DECL_MODULE_IMPORT_P (not_tmpl))
    {
      /* We can be given the TEMPLATE_RESULT.  We want the
         TEMPLATE_DECL.  */
      int use_tpl = -1;
      if (tree ti = node_template_info (decl, use_tpl))
        {
          tree tmpl = TI_TEMPLATE (ti);
          if (use_tpl == 2)
            {
              /* A partial specialization.  Find that specialization's
                 template_decl.  */
              for (tree list = DECL_TEMPLATE_SPECIALIZATIONS (tmpl);
                   list; list = TREE_CHAIN (list))
                if (DECL_TEMPLATE_RESULT (TREE_VALUE (list)) == decl)
                  {
                    decl = TREE_VALUE (list);
                    break;
                  }
            }
          else if (DECL_TEMPLATE_RESULT (tmpl) == decl)
            decl = tmpl;
        }
      unsigned index = import_entity_index (decl);
      them = import_entity_module (index);
    }

  // Decl is attached to a named module.  Current scope needs to be
  // attached to the same module.
  if (!module_attach_p ())
    return false;

  // Both attached to named module.
  if (me == them)
    return true;

  return me && get_primary (them) == get_primary (me);
}

/* DECL is being created by this TU.  Record it came from here.  We
   record module purview, so we can see if a partial or explicit
   specialization needs to be written out, even though its purviewness
   comes from the most general template.  */

void
set_instantiating_module (tree decl)
{
  gcc_assert (TREE_CODE (decl) == FUNCTION_DECL
              || VAR_P (decl)
              || TREE_CODE (decl) == TYPE_DECL
              || TREE_CODE (decl) == CONCEPT_DECL
              || TREE_CODE (decl) == TEMPLATE_DECL
              || (TREE_CODE (decl) == NAMESPACE_DECL
                  && DECL_NAMESPACE_ALIAS (decl)));

  if (!modules_p ())
    return;

  decl = STRIP_TEMPLATE (decl);

  if (!DECL_LANG_SPECIFIC (decl) && module_purview_p ())
    retrofit_lang_decl (decl);

  if (DECL_LANG_SPECIFIC (decl))
    {
      DECL_MODULE_PURVIEW_P (decl) = module_purview_p ();
      /* If this was imported, we'll still be in the entity_hash.  */
      DECL_MODULE_IMPORT_P (decl) = false;
    }
}

/* If DECL is a class member whose class is not defined in this TU
   (it was imported), remember this decl.  */

void
set_defining_module (tree decl)
{
  gcc_checking_assert (!DECL_LANG_SPECIFIC (decl)
                       || !DECL_MODULE_IMPORT_P (decl));

  if (module_p ())
    {
      /* We need to track all declarations within a module, not just
         those in the module purview, because we don't necessarily know
         yet if this module will require a CMI while in the global
         fragment.  */
      tree ctx = DECL_CONTEXT (decl);
      if (ctx
          && (TREE_CODE (ctx) == RECORD_TYPE || TREE_CODE (ctx) == UNION_TYPE)
          && DECL_LANG_SPECIFIC (TYPE_NAME (ctx))
          && DECL_MODULE_IMPORT_P (TYPE_NAME (ctx)))
        {
          /* This entity's context is from an import.  We may need to
             record this entity to make sure we emit it in the CMI.
             Template specializations are in the template hash tables,
             so we don't need to record them here as well.  */
          int use_tpl = -1;
          tree ti = node_template_info (decl, use_tpl);
          if (use_tpl <= 0)
            {
              if (ti)
                {
                  gcc_checking_assert (!use_tpl);
                  /* Get to the TEMPLATE_DECL.  */
                  decl = TI_TEMPLATE (ti);
                }

              /* Record it on the class_members list.  */
              vec_safe_push (class_members, decl);
            }
        }
      else if (DECL_IMPLICIT_TYPEDEF_P (decl)
               && CLASSTYPE_TEMPLATE_SPECIALIZATION (TREE_TYPE (decl)))
        /* This is a partial or explicit specialization.  */
        vec_safe_push (partial_specializations, decl);
    }
}

/* Record the module attachment and export-ness of namespace-scope
   DECL.  FRIEND_P is true for a friend declaration, whose originating
   decl may differ from DECL.  */

void
set_originating_module (tree decl, bool friend_p ATTRIBUTE_UNUSED)
{
  set_instantiating_module (decl);

  if (!DECL_NAMESPACE_SCOPE_P (decl))
    return;

  gcc_checking_assert (friend_p || decl == get_originating_module_decl (decl));

  if (module_attach_p ())
    {
      retrofit_lang_decl (decl);
      DECL_MODULE_ATTACH_P (decl) = true;
    }

  if (!module_exporting_p ())
    return;

  // FIXME: Check ill-formed linkage
  DECL_MODULE_EXPORT_P (decl) = true;
}

/* DECL is keyed to CTX for ODR purposes.  */

void
maybe_key_decl (tree ctx, tree decl)
{
  if (!modules_p ())
    return;

  /* We only need to deal with lambdas attached to var, field,
     parm, or type decls.  */
  if (TREE_CODE (ctx) != VAR_DECL
      && TREE_CODE (ctx) != FIELD_DECL
      && TREE_CODE (ctx) != PARM_DECL
      && TREE_CODE (ctx) != TYPE_DECL)
    return;

  /* For fields, key it to the containing type to handle deduplication
     correctly.  */
  if (TREE_CODE (ctx) == FIELD_DECL)
    ctx = TYPE_NAME (DECL_CONTEXT (ctx));

  if (!keyed_table)
    keyed_table = new keyed_map_t (EXPERIMENT (1, 400));

  auto &vec = keyed_table->get_or_insert (ctx);
  if (!vec.length ())
    {
      retrofit_lang_decl (ctx);
      DECL_MODULE_KEYED_DECLS_P (ctx) = true;
    }
  vec.safe_push (decl);
}

/* Create the flat name string.  It is simplest to have it handy.  */

void
module_state::set_flatname ()
{
  gcc_checking_assert (!flatname);
  if (parent)
    {
      auto_vec<tree,5> ids;
      size_t len = 0;
      char const *primary = NULL;
      size_t pfx_len = 0;

      for (module_state *probe = this;
           probe;
           probe = probe->parent)
        if (is_partition () && !probe->is_partition ())
          {
            primary = probe->get_flatname ();
            pfx_len = strlen (primary);
            break;
          }
        else
          {
            ids.safe_push (probe->name);
            len += IDENTIFIER_LENGTH (probe->name) + 1;
          }

      char *flat = XNEWVEC (char, pfx_len + len + is_partition ());
      flatname = flat;

      if (primary)
        {
          memcpy (flat, primary, pfx_len);
          flat += pfx_len;
          *flat++ = ':';
        }

      for (unsigned len = 0; ids.length ();)
        {
          if (len)
            flat[len++] = '.';
          tree elt = ids.pop ();
          unsigned l = IDENTIFIER_LENGTH (elt);
          memcpy (flat + len, IDENTIFIER_POINTER (elt), l + 1);
          len += l;
        }
    }
  else if (is_header ())
    flatname = TREE_STRING_POINTER (name);
  else
    flatname = IDENTIFIER_POINTER (name);
}

/* Read the CMI file for a module.  */

bool
module_state::do_import (cpp_reader *reader, bool outermost)
{
  gcc_assert (global_namespace == current_scope () && loadedness == ML_NONE);

  loc = linemap_module_loc (line_table, loc, get_flatname ());

  if (lazy_open >= lazy_limit)
    freeze_an_elf ();

  int fd = -1;
  int e = ENOENT;
  if (filename)
    {
      const char *file = maybe_add_cmi_prefix (filename);
      dump () && dump ("CMI is %s", file);
      if (note_module_cmi_yes || inform_cmi_p)
        inform (loc, "reading CMI %qs", file);
      /* Add the CMI file to the dependency tracking.  */
      if (cpp_get_deps (reader))
        deps_add_dep (cpp_get_deps (reader), file);
      fd = open (file, O_RDONLY | O_CLOEXEC | O_BINARY);
      e = errno;
    }

  gcc_checking_assert (!slurp);
  slurp = new slurping (new elf_in (fd, e));

  bool ok = true;
  if (!from ()->get_error ())
    {
      announce ("importing");
      loadedness = ML_CONFIG;
      lazy_open++;
      ok = read_initial (reader);
      slurp->lru = ++lazy_lru;
    }

  gcc_assert (slurp->current == ~0u);

  return check_read (outermost, ok);
}

/* Attempt to increase the file descriptor limit.  */

static bool
try_increase_lazy (unsigned want)
{
  gcc_checking_assert (lazy_open >= lazy_limit);

  /* If we're increasing, saturate at hard limit.  */
  if (want > lazy_hard_limit && lazy_limit < lazy_hard_limit)
    want = lazy_hard_limit;

#if HAVE_SETRLIMIT
  if ((!lazy_limit || !param_lazy_modules)
      && lazy_hard_limit
      && want <= lazy_hard_limit)
    {
      struct rlimit rlimit;
      rlimit.rlim_cur = want + LAZY_HEADROOM;
      rlimit.rlim_max = lazy_hard_limit + LAZY_HEADROOM;
      if (!setrlimit (RLIMIT_NOFILE, &rlimit))
        lazy_limit = want;
    }
#endif

  return lazy_open < lazy_limit;
}

/* Pick a victim module to freeze its reader.  */

void
module_state::freeze_an_elf ()
{
  if (try_increase_lazy (lazy_open * 2))
    return;

  module_state *victim = NULL;
  for (unsigned ix = modules->length (); ix--;)
    {
      module_state *candidate = (*modules)[ix];
      if (candidate && candidate->slurp && candidate->slurp->lru
          && candidate->from ()->is_freezable ()
          && (!victim || victim->slurp->lru > candidate->slurp->lru))
        victim = candidate;
    }

  if (victim)
    {
      dump () && dump ("Freezing '%s'", victim->filename);
      if (victim->slurp->macro_defs.size)
        /* Save the macro definitions to a buffer.  */
        victim->from ()->preserve (victim->slurp->macro_defs);
      if (victim->slurp->macro_tbl.size)
        /* Save the macro table to a buffer.  */
        victim->from ()->preserve (victim->slurp->macro_tbl);
      victim->from ()->freeze ();
      lazy_open--;
    }
  else
    dump () && dump ("No module available for freezing");
}

/* Load the lazy slot *MSLOT, INDEX'th slot of the module.  */

bool
module_state::lazy_load (unsigned index, binding_slot *mslot)
{
  unsigned n = dump.push (this);

  gcc_checking_assert (function_depth);

  unsigned cookie = mslot->get_lazy ();
  unsigned snum = cookie >> 2;
  dump () && dump ("Loading entity %M[%u] section:%u", this, index, snum);

  bool ok = load_section (snum, mslot);

  dump.pop (n);

  return ok;
}

/* Load MOD's binding for NS::ID into *MSLOT.  *MSLOT contains the
   lazy cookie.  */

void
lazy_load_binding (unsigned mod, tree ns, tree id, binding_slot *mslot)
{
  int count = errorcount + warningcount;

  timevar_start (TV_MODULE_IMPORT);

  /* Make sure lazy loading from a template context behaves as if
     from a non-template context.  */
  processing_template_decl_sentinel ptds;

  /* Stop GC happening, even in outermost loads (because our caller
     could well be building up a lookup set).  */
  function_depth++;

  gcc_checking_assert (mod);
  module_state *module = (*modules)[mod];
  unsigned n = dump.push (module);

  unsigned snum = mslot->get_lazy ();
  dump () && dump ("Lazily binding %P@%N section:%u", ns, id,
                   module->name, snum);

  bool ok = !recursive_lazy (snum);
  if (ok)
    {
      ok = module->load_section (snum, mslot);
      lazy_snum = 0;
      post_load_processing ();
    }

  dump.pop (n);

  function_depth--;

  timevar_stop (TV_MODULE_IMPORT);

  if (!ok)
    fatal_error (input_location,
                 module->is_header ()
                 ? G_("failed to load binding %<%E%s%E%>")
                 : G_("failed to load binding %<%E%s%E@%s%>"),
                 ns, &"::"[ns == global_namespace ? 2 : 0], id,
                 module->get_flatname ());

  if (count != errorcount + warningcount)
    inform (input_location,
            module->is_header ()
            ? G_("during load of binding %<%E%s%E%>")
            : G_("during load of binding %<%E%s%E@%s%>"),
            ns, &"::"[ns == global_namespace ? 2 : 0], id,
            module->get_flatname ());
}

/* Load any pending entities keyed to the top-key of DECL.  */

void
lazy_load_pendings (tree decl)
{
  /* Make sure lazy loading from a template context behaves as if
     from a non-template context.  */
  processing_template_decl_sentinel ptds;

  tree key_decl;
  pending_key key;
  key.ns = find_pending_key (decl, &key_decl);
  key.id = DECL_NAME (key_decl);

  auto *pending_vec = pending_table ? pending_table->get (key) : nullptr;
  if (!pending_vec)
    return;

  int count = errorcount + warningcount;

  timevar_start (TV_MODULE_IMPORT);
  bool ok = !recursive_lazy ();
  if (ok)
    {
      function_depth++; /* Prevent GC.  */
      unsigned n = dump.push (NULL);
      dump () && dump ("Reading %u pending entities keyed to %P",
                       pending_vec->length (), key.ns, key.id);
      for (unsigned ix = pending_vec->length (); ix--;)
        {
          unsigned index = (*pending_vec)[ix];
          binding_slot *slot = &(*entity_ary)[index];

          if (slot->is_lazy ())
            {
              module_state *import = import_entity_module (index);
              if (!import->lazy_load (index - import->entity_lwm, slot))
                ok = false;
            }
          else if (dump ())
            {
              module_state *import = import_entity_module (index);
              dump () && dump ("Entity %M[%u] already loaded",
                               import, index - import->entity_lwm);
            }
        }

      pending_table->remove (key);
      dump.pop (n);
      lazy_snum = 0;
      post_load_processing ();
      function_depth--;
    }

  timevar_stop (TV_MODULE_IMPORT);

  if (!ok)
    fatal_error (input_location, "failed to load pendings for %<%E%s%E%>",
                 key.ns, &"::"[key.ns == global_namespace ? 2 : 0], key.id);

  if (count != errorcount + warningcount)
    inform (input_location, "during load of pendings for %<%E%s%E%>",
            key.ns, &"::"[key.ns == global_namespace ? 2 : 0], key.id);
}

/* Directly import IMPORT, loading its language state if that has not
   yet been done, and record it in the current TU's import bitmaps.  */

static void
direct_import (module_state *import, cpp_reader *reader)
{
  timevar_start (TV_MODULE_IMPORT);
  unsigned n = dump.push (import);

  gcc_checking_assert (import->is_direct () && import->has_location ());
  if (import->loadedness == ML_NONE)
    if (!import->do_import (reader, true))
      gcc_unreachable ();

  if (import->loadedness < ML_LANGUAGE)
    {
      if (!keyed_table)
        keyed_table = new keyed_map_t (EXPERIMENT (1, 400));
      import->read_language (true);
    }

  (*modules)[0]->set_import (import, import->exported_p);

  dump.pop (n);
  timevar_stop (TV_MODULE_IMPORT);
}

/* Import module IMPORT.  */

void
import_module (module_state *import, location_t from_loc, bool exporting_p,
               tree, cpp_reader *reader)
{
  if (!import->check_not_purview (from_loc))
    return;

  if (!import->is_header () && current_lang_depth ())
    /* Only header units should appear inside language
       specifications.  The std doesn't specify this, but I think
       that's an error in resolving US 033, because language linkage
       is also our escape clause to getting things into the global
       module, so we don't want to confuse things by having to think
       about whether 'extern "C++" { import foo; }' puts foo's
       contents into the global module all of a sudden.  */
    warning (0, "import of named module %qs inside language-linkage block",
             import->get_flatname ());

  if (exporting_p || module_exporting_p ())
    import->exported_p = true;

  if (import->loadedness != ML_NONE)
    {
      from_loc = ordinary_loc_of (line_table, from_loc);
      linemap_module_reparent (line_table, import->loc, from_loc);
    }
  gcc_checking_assert (!import->module_p);
  gcc_checking_assert (import->is_direct () && import->has_location ());

  direct_import (import, reader);
}

/* Declare the name of the current module to be NAME.  EXPORTING_P is
   true if this TU is the exporting module unit.  */

void
declare_module (module_state *module, location_t from_loc, bool exporting_p,
                tree, cpp_reader *reader)
{
  gcc_assert (global_namespace == current_scope ());

  module_state *current = (*modules)[0];
  if (module_purview_p () || module->loadedness > ML_CONFIG)
    {
      error_at (from_loc, module_purview_p ()
                ? G_("module already declared")
                : G_("module already imported"));
      if (module_purview_p ())
        module = current;
      inform (module->loc, module_purview_p ()
              ? G_("module %qs declared here")
              : G_("module %qs imported here"),
              module->get_flatname ());
      return;
    }

  gcc_checking_assert (module->module_p);
  gcc_checking_assert (module->is_direct () && module->has_location ());

  /* Yer a module, 'arry.  */
  module_kind = module->is_header () ? MK_HEADER : MK_NAMED | MK_ATTACH;

  // Even in header units, we consider the decls to be purview.
  module_kind |= MK_PURVIEW;

  if (module->is_partition ())
    module_kind |= MK_PARTITION;
  if (exporting_p)
    {
      module->interface_p = true;
      module_kind |= MK_INTERFACE;
    }

  if (module_has_cmi_p ())
    {
      /* Copy the importing information we may have already done.  We
         do not need to separate out the imports that only happen in
         the GMF, in spite of what the literal wording of the std
         might imply.  See p2191, where the core list had a discussion
         and the module implementors agreed that the GMF of a named
         module is invisible to importers.  */
      module->imports = current->imports;

      module->mod = 0;
      (*modules)[0] = module;
    }
  else
    {
      module->interface_p = true;
      current->parent = module; /* So mangler knows module identity.  */
      direct_import (module, reader);
    }
}

/* Return true IFF we must emit a module global initializer function
   (which will be called by importers' init code).  */

bool
module_global_init_needed ()
{
  return module_has_cmi_p () && !header_module_p ();
}

/* Calculate which, if any, import initializers need calling.  */

bool
module_determine_import_inits ()
{
  if (!modules || header_module_p ())
    return false;

  /* Prune active_init_p.  We need the same bitmap allocation
     scheme as for the imports member.  */
  function_depth++; /* Disable GC.  */
  bitmap covered_imports (BITMAP_GGC_ALLOC ());

  bool any = false;

  /* Because indirect imports are before their direct import, and
     we're scanning the array backwards, we only need one pass!  */
  for (unsigned ix = modules->length (); --ix;)
    {
      module_state *import = (*modules)[ix];

      if (!import->active_init_p)
        ;
      else if (bitmap_bit_p (covered_imports, ix))
        import->active_init_p = false;
      else
        {
          /* Everything this imports is therefore handled by its
             initializer, so doesn't need initializing by us.  */
          bitmap_ior_into (covered_imports, import->imports);
          any = true;
        }
    }
  function_depth--;

  return any;
}

/* Emit calls to each direct import's global initializer, including
   direct imports of directly imported header units.  The initializers
   of (static) entities in header units will be called by their
   importing modules (for the instance contained within that), or by
   the current TU (for the instances we've brought in).  Of course
   such header unit behaviour is evil, but iostream went through that
   door some time ago.  */

void
module_add_import_initializers ()
{
  if (!modules || header_module_p ())
    return;

  tree fntype = build_function_type (void_type_node, void_list_node);
  releasing_vec args;  // There are no args.

  for (unsigned ix = modules->length (); --ix;)
    {
      module_state *import = (*modules)[ix];
      if (import->active_init_p)
        {
          tree name = mangle_module_global_init (ix);
          tree fndecl = build_lang_decl (FUNCTION_DECL, name, fntype);

          DECL_CONTEXT (fndecl) = FROB_CONTEXT (global_namespace);
          SET_DECL_ASSEMBLER_NAME (fndecl, name);
          TREE_PUBLIC (fndecl) = true;
          determine_visibility (fndecl);

          tree call = cp_build_function_call_vec (fndecl, &args,
                                                  tf_warning_or_error);
          finish_expr_stmt (call);
        }
    }
}

/* NAME & LEN are a preprocessed header name, possibly including the
   surrounding "" or <> characters.  Return the raw string name of the
   module to which it refers.  This will be an absolute path, or begin
   with ./, so it is immediately distinguishable from a (non-header
   unit) module name.  If READER is non-null, ask the preprocessor to
   locate the header to which it refers using the appropriate include
   path.  Note that we never do \ processing of the string, as that
   matches the preprocessor's behaviour.  */

static const char *
canonicalize_header_name (cpp_reader *reader, location_t loc, bool unquoted,
                          const char *str, size_t &len_r)
{
  size_t len = len_r;
  static char *buf = 0;
  static size_t alloc = 0;

  if (!unquoted)
    {
      gcc_checking_assert (len >= 2
                           && ((reader && str[0] == '<' && str[len-1] == '>')
                               || (str[0] == '"' && str[len-1] == '"')));
      str += 1;
      len -= 2;
    }

  if (reader)
    {
      gcc_assert (!unquoted);

      if (len >= alloc)
        {
          alloc = len + 1;
          buf = XRESIZEVEC (char, buf, alloc);
        }
      memcpy (buf, str, len);
      buf[len] = 0;

      if (const char *hdr
          = cpp_probe_header_unit (reader, buf, str[-1] == '<', loc))
        {
          len = strlen (hdr);
          str = hdr;
        }
      else
        str = buf;
    }

  if (!(str[0] == '.' ? IS_DIR_SEPARATOR (str[1]) : IS_ABSOLUTE_PATH (str)))
    {
      /* Prepend './'.  */
      if (len + 3 > alloc)
        {
          alloc = len + 3;
          buf = XRESIZEVEC (char, buf, alloc);
        }

      buf[0] = '.';
      buf[1] = DIR_SEPARATOR;
      memmove (buf + 2, str, len);
      len += 2;
      buf[len] = 0;
      str = buf;
    }

  len_r = len;
  return str;
}

/* Set the CMI name from a cody packet.  Issue an error if
   ill-formed.  */

void module_state::set_filename (const Cody::Packet &packet)
{
  gcc_checking_assert (!filename);
  if (packet.GetCode () == Cody::Client::PC_PATHNAME)
    filename = xstrdup (packet.GetString ().c_str ());
  else
    {
      gcc_checking_assert (packet.GetCode () == Cody::Client::PC_ERROR);
      error_at (loc, "unknown Compiled Module Interface: %s",
                packet.GetString ().c_str ());
    }
}

/* Figure out whether to treat HEADER as an include or an import.  */

static char *
maybe_translate_include (cpp_reader *reader, line_maps *lmaps, location_t loc,
                         const char *path)
{
  if (!modules_p ())
    {
      /* Turn off.  */
      cpp_get_callbacks (reader)->translate_include = NULL;
      return nullptr;
    }

  if (!spans.init_p ())
    /* Before the main file, don't divert.  */
    return nullptr;

  dump.push (NULL);

  dump () && dump ("Checking include translation '%s'", path);
  auto *mapper = get_mapper (cpp_main_loc (reader), cpp_get_deps (reader));

  size_t len = strlen (path);
  path = canonicalize_header_name (NULL, loc, true, path, len);
  auto packet = mapper->IncludeTranslate (path, Cody::Flags::None, len);
  int xlate = false;
  if (packet.GetCode () == Cody::Client::PC_BOOL)
    xlate = -int (packet.GetInteger ());
  else if (packet.GetCode () == Cody::Client::PC_PATHNAME)
    {
      /* Record the CMI name for when we do the import.  */
      module_state *import = get_module (build_string (len, path));
      import->set_filename (packet);
      xlate = +1;
    }
  else
    {
      gcc_checking_assert (packet.GetCode () == Cody::Client::PC_ERROR);
      error_at (loc, "cannot determine %<#include%> translation of %s: %s",
                path, packet.GetString ().c_str ());
    }

  bool note = false;
  if (note_include_translate_yes && xlate > 1)
    note = true;
  else if (note_include_translate_no && xlate == 0)
    note = true;
  else if (note_includes)
    /* We do not expect the note_includes vector to be large, so O(N)
       iteration.  */
    for (unsigned ix = note_includes->length (); !note && ix--;)
      if (!strcmp ((*note_includes)[ix], path))
        note = true;

  if (note)
    inform (loc, xlate
            ? G_("include %qs translated to import")
            : G_("include %qs processed textually"), path);

  dump () && dump (xlate ? "Translating include to import"
                   : "Keeping include as include");
  dump.pop (0);

  if (!(xlate > 0))
    return nullptr;

  /* Create the translation text.  */
  loc = ordinary_loc_of (lmaps, loc);
  const line_map_ordinary *map
    = linemap_check_ordinary (linemap_lookup (lmaps, loc));
  unsigned col = SOURCE_COLUMN (map, loc);
  col -= (col != 0); /* Columns are 1-based.  */

  unsigned alloc = len + col + 60;
  char *res = XNEWVEC (char, alloc);

  strcpy (res, "__import");
  unsigned actual = 8;
  if (col > actual)
    {
      /* Pad out so the filename appears at the same position.  */
      memset (res + actual, ' ', col - actual);
      actual = col;
    }
  /* No need to encode characters, that's not how header names are
     handled.  */
  actual += snprintf (res + actual, alloc - actual,
                      "\"%s\" [[__translated]];\n", path);
  gcc_checking_assert (actual < alloc);

  /* cpplib will delete the buffer.  */
  return res;
}

/* Set up the current TU as a header unit, named after the main input
   file.  */

static void
begin_header_unit (cpp_reader *reader)
{
  /* Set the module header name from the main_input_filename.  */
  const char *main = main_input_filename;
  size_t len = strlen (main);
  main = canonicalize_header_name (NULL, 0, true, main, len);
  module_state *module = get_module (build_string (len, main));

  preprocess_module (module, cpp_main_loc (reader), false, false, true, reader);
}

/* We've just properly entered the main source file.  I.e. after the
   command line, builtins and forced headers.  Record the line map and
   location of this map.  Note we may be called more than once.  The
   first call sticks.  */

void
module_begin_main_file (cpp_reader *reader, line_maps *lmaps,
                        const line_map_ordinary *map)
{
  gcc_checking_assert (lmaps == line_table);
  if (modules_p () && !spans.init_p ())
    {
      unsigned n = dump.push (NULL);
      spans.init (lmaps, map);
      dump.pop (n);
      if (flag_header_unit && !cpp_get_options (reader)->preprocessed)
        {
          /* Tell the preprocessor this is an include file.  */
          cpp_retrofit_as_include (reader);
          begin_header_unit (reader);
        }
    }
}

/* Process the pending_import queue, making sure we know the
   filenames.  */

static void
name_pending_imports (cpp_reader *reader)
{
  auto *mapper = get_mapper (cpp_main_loc (reader), cpp_get_deps (reader));

  if (!vec_safe_length (pending_imports))
    /* Not doing anything.  */
    return;

  timevar_start (TV_MODULE_MAPPER);

  auto n = dump.push (NULL);
  dump () && dump ("Resolving direct import names");
  bool want_deps = (bool (mapper->get_flags () & Cody::Flags::NameOnly)
		    || cpp_get_deps (reader));
  bool any = false;

  for (unsigned ix = 0; ix != pending_imports->length (); ix++)
    {
      module_state *module = (*pending_imports)[ix];
      gcc_checking_assert (module->is_direct ());
      if (!module->filename && !module->visited_p)
	{
	  bool export_p = (module->module_p
			   && (module->is_partition () || module->exported_p));

	  Cody::Flags flags = Cody::Flags::None;
	  if (flag_preprocess_only
	      && !(module->is_header () && !export_p))
	    {
	      if (!want_deps)
		continue;
	      flags = Cody::Flags::NameOnly;
	    }

	  if (!any)
	    {
	      any = true;
	      mapper->Cork ();
	    }
	  if (export_p)
	    mapper->ModuleExport (module->get_flatname (), flags);
	  else
	    mapper->ModuleImport (module->get_flatname (), flags);
	  module->visited_p = true;
	}
    }

  if (any)
    {
      auto response = mapper->Uncork ();
      auto r_iter = response.begin ();
      for (unsigned ix = 0; ix != pending_imports->length (); ix++)
	{
	  module_state *module = (*pending_imports)[ix];
	  if (module->visited_p)
	    {
	      module->visited_p = false;
	      gcc_checking_assert (!module->filename);

	      module->set_filename (*r_iter);
	      ++r_iter;
	    }
	}
    }

  dump.pop (n);

  timevar_stop (TV_MODULE_MAPPER);
}

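The function above relies on an ordering contract: requests queued between Cork and Uncork come back as one response per request, in request order, which is why the second walk over pending_imports can match responses to modules purely by the visited flag. A minimal stand-in sketch of that contract (toy_mapper is hypothetical; the real Cody protocol is much richer):

```cpp
#include <string>
#include <vector>

// Toy model of the Cork/Uncork batching used with the module mapper:
// requests made while corked are only queued, and Uncork returns one
// response per request, in request order.
struct toy_mapper
{
  std::vector<std::string> queued;
  bool corked = false;

  void Cork () { corked = true; }

  void ModuleImport (const std::string &name)
  {
    // While corked, nothing is sent; just remember the request.
    queued.push_back (name);
  }

  // Flush the batch: the i'th response answers the i'th request.
  std::vector<std::string> Uncork ()
  {
    std::vector<std::string> responses;
    for (const std::string &name : queued)
      responses.push_back ("CMI-for-" + name);  // pretend resolution
    queued.clear ();
    corked = false;
    return responses;
  }
};
```

Because both walks visit pending_imports in the same order, advancing a single response iterator for each visited module is enough; no per-request keys are needed.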
/* We've just lexed a module-specific control line for MODULE.  Mark
   the module as a direct import, and possibly load up its macro
   state.  Returns the primary module, if this is a module
   declaration.  */
/* Perhaps we should offer a preprocessing mode where we read the
   directives from the header unit, rather than require the header's
   CMI.  */

module_state *
preprocess_module (module_state *module, location_t from_loc,
		   bool in_purview, bool is_import, bool is_export,
		   cpp_reader *reader)
{
  if (!is_import)
    {
      if (module->loc)
	/* It's already been mentioned, so ignore its module-ness.  */
	is_import = true;
      else
	{
	  /* Record it is the module.  */
	  module->module_p = true;
	  if (is_export)
	    {
	      module->exported_p = true;
	      module->interface_p = true;
	    }
	}
    }

  if (module->directness < MD_DIRECT + in_purview)
    {
      /* Mark as a direct import.  */
      module->directness = module_directness (MD_DIRECT + in_purview);

      /* Set the location to be most informative for users.  */
      from_loc = ordinary_loc_of (line_table, from_loc);
      if (module->loadedness != ML_NONE)
	linemap_module_reparent (line_table, module->loc, from_loc);
      else
	{
	  module->loc = from_loc;
	  if (!module->flatname)
	    module->set_flatname ();
	}
    }

  auto desired = ML_CONFIG;
  if (is_import
      && module->is_header ()
      && (!cpp_get_options (reader)->preprocessed
	  || cpp_get_options (reader)->directives_only))
    /* We need preprocessor state now.  */
    desired = ML_PREPROCESSOR;

  if (!is_import || module->loadedness < desired)
    {
      vec_safe_push (pending_imports, module);

      if (desired == ML_PREPROCESSOR)
	{
	  unsigned n = dump.push (NULL);

	  dump () && dump ("Reading %M preprocessor state", module);
	  name_pending_imports (reader);

	  /* Preserve the state of the line-map.  */
	  unsigned pre_hwm = LINEMAPS_ORDINARY_USED (line_table);

	  /* We only need to close the span, if we're going to emit a
	     CMI.  But that's a little tricky -- our token scanner
	     needs to be smarter -- and this isn't much state.
	     Remember, we've not parsed anything at this point, so
	     our module state flags are inadequate.  */
	  spans.maybe_init ();
	  spans.close ();

	  timevar_start (TV_MODULE_IMPORT);

	  /* Load the config of each pending import -- we must assign
	     module numbers monotonically.  */
	  for (unsigned ix = 0; ix != pending_imports->length (); ix++)
	    {
	      auto *import = (*pending_imports)[ix];
	      if (!(import->module_p
		    && (import->is_partition () || import->exported_p))
		  && import->loadedness == ML_NONE
		  && (import->is_header () || !flag_preprocess_only))
		{
		  unsigned n = dump.push (import);
		  import->do_import (reader, true);
		  dump.pop (n);
		}
	    }
	  vec_free (pending_imports);

	  /* Restore the line-map state.  */
	  spans.open (linemap_module_restore (line_table, pre_hwm));

	  /* Now read the preprocessor state of this particular
	     import.  */
	  if (module->loadedness == ML_CONFIG
	      && module->read_preprocessor (true))
	    module->import_macros ();

	  timevar_stop (TV_MODULE_IMPORT);

	  dump.pop (n);
	}
    }

  return is_import ? NULL : get_primary (module);
}

/* We've completed phase-4 translation.  Emit any dependency
   information for the not-yet-loaded direct imports, and fill in
   their file names.  We'll have already loaded up the direct header
   unit wavefront.  */

void
preprocessed_module (cpp_reader *reader)
{
  unsigned n = dump.push (NULL);

  dump () && dump ("Completed phase-4 (tokenization) processing");

  name_pending_imports (reader);
  vec_free (pending_imports);

  spans.maybe_init ();
  spans.close ();

  using iterator = hash_table<module_state_hash>::iterator;
  if (mkdeps *deps = cpp_get_deps (reader))
    {
      /* Walk the module hash, informing the dependency machinery.  */
      iterator end = modules_hash->end ();
      for (iterator iter = modules_hash->begin (); iter != end; ++iter)
	{
	  module_state *module = *iter;

	  if (module->is_direct ())
	    {
	      if (module->is_module ()
		  && (module->is_interface () || module->is_partition ()))
		deps_add_module_target (deps, module->get_flatname (),
					maybe_add_cmi_prefix (module->filename),
					module->is_header (),
					module->is_exported ());
	      else
		deps_add_module_dep (deps, module->get_flatname ());
	    }
	}
    }

  if (flag_header_unit && !flag_preprocess_only)
    {
      /* Find the main module -- remember, it's not yet in the module
	 array.  */
      iterator end = modules_hash->end ();
      for (iterator iter = modules_hash->begin (); iter != end; ++iter)
	{
	  module_state *module = *iter;
	  if (module->is_module ())
	    {
	      declare_module (module, cpp_main_loc (reader), true, NULL, reader);
	      module_kind |= MK_EXPORTING;
	      break;
	    }
	}
    }

  dump.pop (n);
}

/* VAL is a global tree, add it to the global vec if it is
   interesting.  Add some of its targets, if they too are
   interesting.  We do not add identifiers, as they can be re-found
   via the identifier hash table.  There is a cost to the number of
   global trees.  */

static int
maybe_add_global (tree val, unsigned &crc)
{
  int v = 0;

  if (val && !(identifier_p (val) || TREE_VISITED (val)))
    {
      TREE_VISITED (val) = true;
      crc = crc32_unsigned (crc, fixed_trees->length ());
      vec_safe_push (fixed_trees, val);
      v++;

      if (CODE_CONTAINS_STRUCT (TREE_CODE (val), TS_TYPED))
	v += maybe_add_global (TREE_TYPE (val), crc);
      if (CODE_CONTAINS_STRUCT (TREE_CODE (val), TS_TYPE_COMMON))
	v += maybe_add_global (TYPE_NAME (val), crc);
    }

  return v;
}

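maybe_add_global illustrates a simple vocabulary-checking pattern: each newly seen node is appended to a growing array, and the array length at insertion time is folded into a running CRC, so writer and reader agree on the fixed-tree vocabulary only if they registered the same number of nodes at every step. A self-contained sketch of that pattern (global_registry and crc32_step are illustrative stand-ins, not GCC's types; crc32_step is a plain bitwise CRC-32 standing in for crc32_unsigned):

```cpp
#include <cstdint>
#include <unordered_set>
#include <vector>

// Bitwise CRC-32 (reflected polynomial 0xedb88320) over one 32-bit value.
static uint32_t
crc32_step (uint32_t crc, uint32_t value)
{
  crc ^= value;
  for (int bit = 0; bit < 32; bit++)
    crc = (crc >> 1) ^ (0xedb88320u & -(crc & 1));
  return crc;
}

struct global_registry
{
  std::vector<int> fixed;        // stand-in for fixed_trees
  std::unordered_set<int> seen;  // stand-in for the TREE_VISITED mark
  uint32_t crc = 0;

  // Returns 1 if NODE was newly registered, 0 if already present.
  // Note the CRC folds the insertion index, not the node itself:
  // the check is positional, matching the deterministic traversal
  // both producer and consumer perform.
  int maybe_add (int node)
  {
    if (!seen.insert (node).second)
      return 0;
    crc = crc32_step (crc, fixed.size ());
    fixed.push_back (node);
    return 1;
  }
};
```

Given a deterministic registration order on both sides, equal CRCs mean equal vocabularies; a divergence surfaces as a CRC mismatch rather than silent index confusion.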
/* Initialize module state.  Create the hash table, determine the
   global trees.  Create the module for current TU.  */

void
init_modules (cpp_reader *reader)
{
  /* PCH should not be reachable because of lang-specs, but the
     user could have overridden that.  */
  if (pch_file)
    fatal_error (input_location,
		 "C++ modules are incompatible with precompiled headers");

  if (cpp_get_options (reader)->traditional)
    fatal_error (input_location,
		 "C++ modules are incompatible with traditional preprocessing");

  if (flag_preprocess_only)
    {
      cpp_options *cpp_opts = cpp_get_options (reader);
      if (flag_no_output
	  || (cpp_opts->deps.style != DEPS_NONE
	      && !cpp_opts->deps.need_preprocessor_output))
	{
	  warning (0, flag_dump_macros == 'M'
		   ? G_("macro debug output may be incomplete with modules")
		   : G_("module dependencies require preprocessing"));
	  if (cpp_opts->deps.style != DEPS_NONE)
	    inform (input_location, "you should use the %<-%s%> option",
		    cpp_opts->deps.style == DEPS_SYSTEM ? "MD" : "MMD");
	}
    }

  /* :: is always exported.  */
  DECL_MODULE_EXPORT_P (global_namespace) = true;

  modules_hash = hash_table<module_state_hash>::create_ggc (31);
  vec_safe_reserve (modules, 20);

  /* Create module for current TU.  */
  module_state *current
    = new (ggc_alloc<module_state> ()) module_state (NULL_TREE, NULL, false);
  current->mod = 0;
  bitmap_set_bit (current->imports, 0);
  modules->quick_push (current);

  gcc_checking_assert (!fixed_trees);

  headers = BITMAP_GGC_ALLOC ();

  if (note_includes)
    /* Canonicalize header names.  */
    for (unsigned ix = 0; ix != note_includes->length (); ix++)
      {
	const char *hdr = (*note_includes)[ix];
	size_t len = strlen (hdr);

	bool system = hdr[0] == '<';
	bool user = hdr[0] == '"';
	bool delimed = system || user;

	if (len <= (delimed ? 2 : 0)
	    || (delimed && hdr[len-1] != (system ? '>' : '"')))
	  error ("invalid header name %qs", hdr);

	hdr = canonicalize_header_name (delimed ? reader : NULL,
					0, !delimed, hdr, len);
	char *path = XNEWVEC (char, len + 1);
	memcpy (path, hdr, len);
	path[len] = 0;

	(*note_includes)[ix] = path;
      }

  if (note_cmis)
    /* Canonicalize & mark module names.  */
    for (unsigned ix = 0; ix != note_cmis->length (); ix++)
      {
	const char *name = (*note_cmis)[ix];
	size_t len = strlen (name);

	bool is_system = name[0] == '<';
	bool is_user = name[0] == '"';
	bool is_pathname = false;
	if (!(is_system || is_user))
	  for (unsigned ix = len; !is_pathname && ix--;)
	    is_pathname = IS_DIR_SEPARATOR (name[ix]);
	if (is_system || is_user || is_pathname)
	  {
	    if (len <= (is_pathname ? 0 : 2)
		|| (!is_pathname && name[len-1] != (is_system ? '>' : '"')))
	      {
		error ("invalid header name %qs", name);
		continue;
	      }
	    else
	      name = canonicalize_header_name (is_pathname ? nullptr : reader,
					       0, is_pathname, name, len);
	  }
	if (auto module = get_module (name))
	  module->inform_cmi_p = 1;
	else
	  error ("invalid module name %qs", name);
      }

  dump.push (NULL);

  /* Determine lazy handle bound.  */
  {
    unsigned limit = 1000;
#if HAVE_GETRLIMIT
    struct rlimit rlimit;
    if (!getrlimit (RLIMIT_NOFILE, &rlimit))
      {
	lazy_hard_limit = (rlimit.rlim_max < 1000000
			   ? unsigned (rlimit.rlim_max) : 1000000);
	lazy_hard_limit = (lazy_hard_limit > LAZY_HEADROOM
			   ? lazy_hard_limit - LAZY_HEADROOM : 0);
	if (rlimit.rlim_cur < limit)
	  limit = unsigned (rlimit.rlim_cur);
      }
#endif
    limit = limit > LAZY_HEADROOM ? limit - LAZY_HEADROOM : 1;

    if (unsigned parm = param_lazy_modules)
      {
	if (parm <= limit || !lazy_hard_limit || !try_increase_lazy (parm))
	  lazy_limit = parm;
      }
    else
      lazy_limit = limit;
  }

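The lazy-handle bound above clamps the process's file-descriptor limits and keeps LAZY_HEADROOM descriptors in reserve for ordinary I/O. The arithmetic can be factored into two pure helpers for clarity; this is an illustrative sketch, not GCC's code, and the LAZY_HEADROOM value of 15 is an assumption:

```cpp
// Sketch of the lazy-handle bound computation: clamp the soft and
// hard RLIMIT_NOFILE values, then subtract headroom so lazily-held
// CMI handles never starve ordinary file I/O.
constexpr unsigned LAZY_HEADROOM = 15;  // assumed headroom constant

// Usable lazy limit from the soft descriptor limit; never zero.
static unsigned
lazy_limit_from (unsigned long soft_limit)
{
  unsigned limit = 1000;  // default when the soft limit is generous
  if (soft_limit < limit)
    limit = unsigned (soft_limit);
  return limit > LAZY_HEADROOM ? limit - LAZY_HEADROOM : 1;
}

// Hard ceiling from the hard descriptor limit, capped at a million;
// zero means "cannot raise the lazy limit at all".
static unsigned
lazy_hard_limit_from (unsigned long hard_limit)
{
  unsigned hard = hard_limit < 1000000 ? unsigned (hard_limit) : 1000000;
  return hard > LAZY_HEADROOM ? hard - LAZY_HEADROOM : 0;
}
```

The asymmetric fallbacks mirror the code above: the soft-derived limit bottoms out at 1 (we must be able to hold at least one handle), while the hard-derived ceiling may legitimately be 0.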
  if (dump ())
    {
      verstr_t ver;
      version2string (MODULE_VERSION, ver);
      dump ("Source: %s", main_input_filename);
      dump ("Compiler: %s", version_string);
      dump ("Modules: %s", ver);
      dump ("Checking: %s",
#if CHECKING_P
	    "checking"
#elif ENABLE_ASSERT_CHECKING
	    "asserting"
#else
	    "release"
#endif
	    );
      dump ("Compiled by: "
#ifdef __GNUC__
	    "GCC %d.%d, %s", __GNUC__, __GNUC_MINOR__,
#ifdef __OPTIMIZE__
	    "optimizing"
#else
	    "not optimizing"
#endif
#else
	    "not GCC"
#endif
	    );
      dump ("Reading: %s", MAPPED_READING ? "mmap" : "fileio");
      dump ("Writing: %s", MAPPED_WRITING ? "mmap" : "fileio");
      dump ("Lazy limit: %u", lazy_limit);
      dump ("Lazy hard limit: %u", lazy_hard_limit);
      dump ("");
    }

  /* Construct the global tree array.  This is an array of unique
     global trees (& types).  Do this now, rather than lazily, as
     some global trees are lazily created and we don't want that to
     mess with our syndrome of fixed trees.  */
  unsigned crc = 0;
  vec_alloc (fixed_trees, 250);

  dump () && dump ("+Creating globals");
  /* Insert the TRANSLATION_UNIT_DECL.  */
  TREE_VISITED (DECL_CONTEXT (global_namespace)) = true;
  fixed_trees->quick_push (DECL_CONTEXT (global_namespace));
  for (unsigned jx = 0; global_tree_arys[jx].first; jx++)
    {
      const tree *ptr = global_tree_arys[jx].first;
      unsigned limit = global_tree_arys[jx].second;

      for (unsigned ix = 0; ix != limit; ix++, ptr++)
	{
	  !(ix & 31) && dump ("") && dump ("+\t%u:%u:", jx, ix);
	  unsigned v = maybe_add_global (*ptr, crc);
	  dump () && dump ("+%u", v);
	}
    }
  /* OS- and machine-specific types are dynamically registered at
     runtime, so cannot be part of global_tree_arys.  */
  registered_builtin_types && dump ("") && dump ("+\tB:");
  for (tree t = registered_builtin_types; t; t = TREE_CHAIN (t))
    {
      unsigned v = maybe_add_global (TREE_VALUE (t), crc);
      dump () && dump ("+%u", v);
    }
  global_crc = crc32_unsigned (crc, fixed_trees->length ());
  dump ("") && dump ("Created %u unique globals, crc=%x",
		     fixed_trees->length (), global_crc);
  for (unsigned ix = fixed_trees->length (); ix--;)
    TREE_VISITED ((*fixed_trees)[ix]) = false;

  dump.pop (0);

  if (!flag_module_lazy)
    /* Get the mapper now, if we're not being lazy.  */
    get_mapper (cpp_main_loc (reader), cpp_get_deps (reader));

  if (!flag_preprocess_only)
    {
      pending_table = new pending_map_t (EXPERIMENT (1, 400));
      entity_map = new entity_map_t (EXPERIMENT (1, 400));
      vec_safe_reserve (entity_ary, EXPERIMENT (1, 400));
    }

#if CHECKING_P
  note_defs = note_defs_table_t::create_ggc (1000);
#endif

  if (flag_header_unit && cpp_get_options (reader)->preprocessed)
    begin_header_unit (reader);

  /* Collect here to make sure things are tagged correctly (when
     aggressively GC'd).  */
  ggc_collect ();
}

/* If NODE is a deferred macro, load it.  */

static int
load_macros (cpp_reader *reader, cpp_hashnode *node, void *)
{
  location_t main_loc
    = MAP_START_LOCATION (LINEMAPS_ORDINARY_MAP_AT (line_table, 0));

  if (cpp_user_macro_p (node)
      && !node->value.macro)
    {
      cpp_macro *macro = cpp_get_deferred_macro (reader, node, main_loc);
      dump () && dump ("Loaded macro #%s %I",
		       macro ? "define" : "undef", identifier (node));
    }

  return 1;
}

/* At the end of tokenizing, we no longer need the macro tables of
   imports.  But the user might have requested some checking.  */

void
maybe_check_all_macros (cpp_reader *reader)
{
  if (!warn_imported_macros)
    return;

  /* Force loading of any remaining deferred macros.  This will
     produce diagnostics if they are ill-formed.  */
  unsigned n = dump.push (NULL);
  cpp_forall_identifiers (reader, load_macros, NULL);
  dump.pop (n);
}

// State propagated from finish_module_processing to fini_modules

struct module_processing_cookie
{
  elf_out out;
  module_state_config config;
  char *cmi_name;
  char *tmp_name;
  unsigned crc;
  bool began;

  module_processing_cookie (char *cmi, char *tmp, int fd, int e)
    : out (fd, e), cmi_name (cmi), tmp_name (tmp), crc (0), began (false)
  {
  }
  ~module_processing_cookie ()
  {
    XDELETEVEC (tmp_name);
    XDELETEVEC (cmi_name);
  }
};

/* Write the CMI, if we're a module interface.  */

void *
finish_module_processing (cpp_reader *reader)
{
  module_processing_cookie *cookie = nullptr;

  if (header_module_p ())
    module_kind &= ~MK_EXPORTING;

  if (!modules || !(*modules)[0]->name)
    {
      if (flag_module_only)
	warning (0, "%<-fmodule-only%> used for non-interface");
    }
  else if (!flag_syntax_only)
    {
      int fd = -1;
      int e = -1;

      timevar_start (TV_MODULE_EXPORT);

      /* Force a valid but empty line map at the end.  This simplifies
	 the line table preparation and writing logic.  */
      linemap_add (line_table, LC_ENTER, false, "", 0);

      /* We write to a tmpname, and then atomically rename.  */
      char *cmi_name = NULL;
      char *tmp_name = NULL;
      module_state *state = (*modules)[0];

      unsigned n = dump.push (state);
      state->announce ("creating");
      if (state->filename)
	{
	  size_t len = 0;
	  cmi_name = xstrdup (maybe_add_cmi_prefix (state->filename, &len));
	  tmp_name = XNEWVEC (char, len + 3);
	  memcpy (tmp_name, cmi_name, len);
	  strcpy (&tmp_name[len], "~");

	  if (!errorcount)
	    for (unsigned again = 2; ; again--)
	      {
		fd = open (tmp_name,
			   O_RDWR | O_CREAT | O_TRUNC | O_CLOEXEC | O_BINARY,
			   S_IRUSR|S_IWUSR|S_IRGRP|S_IWGRP|S_IROTH|S_IWOTH);
		e = errno;
		if (fd >= 0 || !again || e != ENOENT)
		  break;
		create_dirs (tmp_name);
	      }
	  if (note_module_cmi_yes || state->inform_cmi_p)
	    inform (state->loc, "writing CMI %qs", cmi_name);
	  dump () && dump ("CMI is %s", cmi_name);
	}

      cookie = new module_processing_cookie (cmi_name, tmp_name, fd, e);

      if (errorcount)
	warning_at (state->loc, 0, "not writing module %qs due to errors",
		    state->get_flatname ());
      else if (cookie->out.begin ())
	{
	  cookie->began = true;
	  auto loc = input_location;
	  /* So crashes finger-point the module decl.  */
	  input_location = state->loc;
	  state->write_begin (&cookie->out, reader, cookie->config, cookie->crc);
	  input_location = loc;
	}

      dump.pop (n);
      timevar_stop (TV_MODULE_EXPORT);

      ggc_collect ();
    }

  if (modules)
    {
      unsigned n = dump.push (NULL);
      dump () && dump ("Imported %u modules", modules->length () - 1);
      dump () && dump ("Containing %u clusters", available_clusters);
      dump () && dump ("Loaded %u clusters (%u%%)", loaded_clusters,
		       (loaded_clusters * 100 + available_clusters / 2) /
		       (available_clusters + !available_clusters));
      dump.pop (n);
    }

  return cookie;
}

// Do the final emission of a module.  At this point we know whether
// the module static initializer is a NOP or not.

static void
late_finish_module (cpp_reader *reader, module_processing_cookie *cookie,
		    bool init_fn_non_empty)
{
  timevar_start (TV_MODULE_EXPORT);

  module_state *state = (*modules)[0];
  unsigned n = dump.push (state);
  state->announce ("finishing");

  cookie->config.active_init = init_fn_non_empty;
  if (cookie->began)
    state->write_end (&cookie->out, reader, cookie->config, cookie->crc);

  if (cookie->out.end () && cookie->cmi_name)
    {
      /* Some OS's do not replace NEWNAME if it already exists.
	 This'll have a race condition in erroneous concurrent
	 builds.  */
      unlink (cookie->cmi_name);
      if (rename (cookie->tmp_name, cookie->cmi_name))
	{
	  dump () && dump ("Rename ('%s','%s') errno=%u",
			   cookie->tmp_name, cookie->cmi_name, errno);
	  cookie->out.set_error (errno);
	}
    }

  if (cookie->out.get_error () && cookie->began)
    {
      error_at (state->loc, "failed to write compiled module: %s",
		cookie->out.get_error (state->filename));
      state->note_cmi_name ();
    }

  if (!errorcount)
    {
      auto *mapper = get_mapper (cpp_main_loc (reader), cpp_get_deps (reader));
      mapper->ModuleCompiled (state->get_flatname ());
    }
  else if (cookie->cmi_name)
    {
      /* We failed, attempt to erase all evidence we even tried.  */
      unlink (cookie->tmp_name);
      unlink (cookie->cmi_name);
    }

  delete cookie;
  dump.pop (n);
  timevar_stop (TV_MODULE_EXPORT);
}

void
fini_modules (cpp_reader *reader, void *cookie, bool has_inits)
{
  if (cookie)
    late_finish_module (reader,
			static_cast<module_processing_cookie *> (cookie),
			has_inits);

  /* We're done with the macro tables now.  */
  vec_free (macro_exports);
  vec_free (macro_imports);
  headers = NULL;

  /* We're now done with everything but the module names.  */
  set_cmi_repo (NULL);
  if (mapper)
    {
      timevar_start (TV_MODULE_MAPPER);
      module_client::close_module_client (0, mapper);
      mapper = nullptr;
      timevar_stop (TV_MODULE_MAPPER);
    }
  module_state_config::release ();

#if CHECKING_P
  note_defs = NULL;
#endif

  if (modules)
    for (unsigned ix = modules->length (); --ix;)
      if (module_state *state = (*modules)[ix])
	state->release ();

  /* No need to lookup modules anymore.  */
  modules_hash = NULL;

  /* Or entity array.  We still need the entity map to find import numbers.  */
  vec_free (entity_ary);
  entity_ary = NULL;

  /* Or remember any pending entities.  */
  delete pending_table;
  pending_table = NULL;

  /* Or any keys -- Let it go!  */
  delete keyed_table;
  keyed_table = NULL;

  /* Allow a GC, we've possibly made much data unreachable.  */
  ggc_collect ();
}

/* If CODE is a module option, handle it & return true.  Otherwise
   return false.  For unknown reasons I cannot get the option
   generation machinery to set fmodule-mapper or -fmodule-header to
   make a string type option variable.  */

bool
handle_module_option (unsigned code, const char *str, int)
{
  auto hdr = CMS_header;

  switch (opt_code (code))
    {
    case OPT_fmodule_mapper_:
      module_mapper_name = str;
      return true;

    case OPT_fmodule_header_:
      {
	if (!strcmp (str, "user"))
	  hdr = CMS_user;
	else if (!strcmp (str, "system"))
	  hdr = CMS_system;
	else
	  error ("unknown header kind %qs", str);
      }
      /* Fallthrough.  */

    case OPT_fmodule_header:
      flag_header_unit = hdr;
      flag_modules = 1;
      return true;

    case OPT_flang_info_include_translate_:
      vec_safe_push (note_includes, str);
      return true;

    case OPT_flang_info_module_cmi_:
      vec_safe_push (note_cmis, str);
      return true;

    default:
      return false;
    }
}

/* Set preprocessor callbacks and options for modules.  */

void
module_preprocess_options (cpp_reader *reader)
{
  gcc_checking_assert (!lang_hooks.preprocess_undef);
  if (modules_p ())
    {
      auto *cb = cpp_get_callbacks (reader);

      cb->translate_include = maybe_translate_include;
      cb->user_deferred_macro = module_state::deferred_macro;
      if (flag_header_unit)
	{
	  /* If the preprocessor hook is already in use, that
	     implementation will call the undef langhook.  */
	  if (cb->undef)
	    lang_hooks.preprocess_undef = module_state::undef_macro;
	  else
	    cb->undef = module_state::undef_macro;
	}
      auto *opt = cpp_get_options (reader);
      opt->module_directives = true;
      opt->main_search = cpp_main_search (flag_header_unit);
    }
}

#include "gt-cp-module.h"
/* Source code of gcc/cp/module.cc.  */