User:Akashr007/sandbox

@

I am so confused. This page says that the at sign (@) is part of the C character set, but I cannot find any information on what it is used for. Does anybody know? 5.150.218.51 (talk) 10:13, 14 May 2013 (UTC)

An "@" would be legal in quoted strings, for instance. The point of the comment is that C source uses the POSIX character set (the same as US-ASCII) TEDickey (talk) 10:38, 14 May 2013 (UTC)
Then why does the list not include dollar sign and backtick? 5.150.218.51 (talk) 10:58, 14 May 2013 (UTC)
The ANSI committee purposely limited the character set to a small superset of ISO 646; specifically, they started with the "invariant" ISO 646 subset of ASCII and added the few additional characters C already used (such as brackets). The backtick (`) and US dollar sign ($) are part of neither invariant ISO 646 nor existing C usage (as of 1988), so that's probably why they were not included. Later, when the ANSI proposal became an ISO proposal, trigraphs (and subsequently digraphs) were added to support compilers in environments where only the ISO 646 subset was available. Some of this is discussed in section 5.2.1 of the Rationale document, but not these two characters specifically. While it's true that the at-sign (@) is also not part of ISO 646, it could be argued that it is a character required for network protocols (e.g. RFC 822[1]), so perhaps that's why it was included. — Loadmaster (talk) 17:09, 14 May 2013 (UTC)
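
For illustration, here is an editor's sketch of what those trigraph spellings look like in use: ??= maps to #, ??( and ??) to [ and ], and ??< and ??> to { and }; the later digraphs (<% %> <: :> %:) cover the same punctuation as ordinary tokens. Note that modern GCC honors trigraphs only under flags such as -trigraphs or -std=c89, and C23 removed trigraphs entirely.

??=include <stdio.h>

int main(void)
??<
    int a??(1??) = ??< 42 ??>;   /* int a[1] = { 42 }; */
    printf("%d\n", a??(0??));    /* prints 42 */
    return 0;
??>
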
Section 6.4.3 Universal character names in ISO/IEC 9899:1999 (E) mentions '$'. VAX C for one did allow dollar signs in identifiers (Apollo's C compiler did also, I recall - but I can cite VAX C more readily). TEDickey (talk) 20:32, 14 May 2013 (UTC)
Good answer, thanks. Also, GNU cpp apparently allows dollar signs in identifiers as an option. 5.150.218.51 (talk) 20:12, 15 May 2013 (UTC)
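
A small sketch of that extension, for the record; the identifier name is invented here, in the style of VMS names. With GCC the option mentioned above is -fdollars-in-identifiers, and on many targets it is already on by default:

#include <stdio.h>

/* '$' in an identifier: a vendor extension (VAX C, GCC, etc.),
   not part of standard C.
   With GCC, compile with: gcc -fdollars-in-identifiers example.c */
int sys$status = 0;

int main(void)
{
    printf("%d\n", sys$status);
    return 0;
}
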
I don't think @ is part of the C character set. That it is "legal within quoted strings" seems not relevant, because there are many other characters that can be included within quoted strings, but they are not listed as part of the C character set. And I don't think it matters that @ is part of network protocols, because that would still amount to @ appearing in a quoted string, unless I am missing something. Bottom line: what is an example where @ is used as part of C syntax, is not in a quoted string, and compiles? 71.212.102.14 (talk) 03:05, 12 November 2013 (UTC)
I have to agree. Leave it out. Nasnema  Chat  06:06, 12 November 2013 (UTC)
Yes, @ is simply not part of the C character set. The standard is quite explicit about what is. Rwessel (talk) 08:03, 12 November 2013 (UTC)
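
To put that conclusion in concrete terms, an editor's sketch; the aside about embedded compilers describes a vendor-specific extension and is an assumption worth checking against particular toolchains:

#include <stdio.h>

int main(void)
{
    const char *addr = "user@example.com";  /* fine: '@' inside a string literal */
    char at = '@';                          /* fine: '@' as a character constant */

    /* There is no standard C construct in which '@' appears outside
       literals and comments; a line such as
           int x @ 0x2000;
       is accepted only by certain embedded compilers as an extension
       for absolute placement, not by standard C. */
    printf("%s %c\n", addr, at);
    return 0;
}
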

Hello World Example

I feel that the hello world program should return 0, since the return-less version could fail on some compilers. It's also worth noting that in C99 int main() can be left without a return statement, in which case it defaults to returning 0; see C99.

#include <stdio.h>

int main(void)
{
    printf("hello, world\n");
    return 0;
}

— Preceding unsigned comment added by LawrencePJ (talkcontribs)

  • I'd also add that the return-less version should not fail on any C89/99/11-compliant compiler (see the sketch at the end of this thread). While the value returned is specified only in C99 and later, the return-less form is valid in C89, but the actual value returned to the OS is indeterminate. From the C89 standard:
2.1.2.2 Hosted environment
(...)
Program termination
A return from the initial call to the main function is equivalent to calling the exit function with the value returned by the main function as its argument. If the main function executes a return that specifies no value, the termination status returned to the host environment is undefined.
Rwessel (talk) 18:15, 29 July 2013 (UTC)
  • Why not just make the main return void? --174.106.183.23 (talk) 05:32, 19 December 2013 (UTC)
Because there are, per the standard, only two allowable forms of main() ("int main(void)" and "int main(int argc, char *argv[])" - or equivalents for the second form). IOW, the only standards conforming versions of main() return int. Section 5.1.2.2.1 of the standard is quoted in the second "Hello world" discussion below, if you're interested. While some implementation might provide/allow a void returning form as an (allowed) extension, support for that is not guaranteed to be universal. Rwessel (talk) 08:30, 19 December 2013 (UTC)
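
A minimal sketch of the return-less form discussed above, valid in C89 and equivalent to returning 0 in C99 and later:

#include <stdio.h>

int main(void)
{
    printf("hello, world\n");
    /* no return statement: in C99 and later this is equivalent to
       "return 0;" -- under C89 the program is still valid, but the
       status returned to the host environment is undefined */
}
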

C++ as preprocessor

"C++ and Objective-C started as preprocessors for C"-sounds weird, and I cant understand what does it mean "as preprocessors". If its how c++ started, why there is no clue for it in c++ essay? Uziel302 (talk) 22:27, 17 August 2013 (UTC)

The wording is awkward: they were initially preprocessors which read C++/Objective-C source and produced C programs (this was, by the way, before C was standardized). The C++ topic lacks most of the technical information which would make it interesting, because that is (presumably) covered in the various sources referred to. TEDickey (talk) 22:41, 17 August 2013 (UTC)
The more correct wording is that C++ and Objective C started out as compilers that generated C code (instead of generating assembly code or binary code). While some people did refer to them as "preprocessors", this is misleading and technically incorrect. The first Bell Labs C++ compiler by Stroustrup was named cfront, alluding to the fact that it was a C++ compiler "front end" to the existing C compiler. The way compilers operate is very different from the way preprocessors operate. — Loadmaster (talk) 16:22, 30 September 2013 (UTC)
Eh. It's kind of between the two. These C++ preprocessor-compilers (call them what you will) don't necessarily change the input at all (except to mangle the function names) and in almost all cases leave most of the source alone. I'm perfectly fine with either term, though, as long as it doesn't thrash back and forth. - Richfife (talk) 19:40, 1 October 2013 (UTC)
Just to be clear, preprocessors do text token manipulations, which is purely lexical processing, but compilers use syntactical and semantic parsing, which includes recognizing data types and managing symbol tables (which add another two or three levels of processing on top of lexical scanning). While the output of both may be pure C, the internal operation of each to generate that output is vastly different. Also, the claim that most of the source code is left alone is simply not true, especially when compiling expressions containing virtual class member function calls, user-defined member operators, object allocations/deallocations, and class vtables, all of which generate fairly complicated intermediate C code. Even the earliest C++ front-end compilers did far more than just mangle names. — Loadmaster (talk) 18:14, 3 October 2013 (UTC)
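
To make that contrast concrete, here is a rough sketch of the kind of C such a front end emits for even a trivial class. The struct layout and the mangled name below are invented for illustration and do not match cfront's actual output:

/* C++ input:
 *
 *   struct Point {
 *       int x, y;
 *       int sum() { return x + y; }
 *   };
 *
 * Plausible translator output: the member function becomes an
 * ordinary C function taking an explicit "this" pointer, with the
 * class and signature encoded ("mangled") into its name.
 */
struct Point {
    int x;
    int y;
};

int sum__5PointFv(struct Point *this)
{
    return this->x + this->y;
}
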
I don't think the meanings of these terms -- preprocessor and compiler -- are as precise as you make them out to be. Certainly the early C++ engines (for lack of a better term) that generated C did significantly more than the C macro preprocessor, but that doesn't mean it is inappropriate to refer to the C++ engine as a preprocessor of sorts. But the potential confusion with the C preprocessor is concerning.

The term compiler traditionally refers to software that takes high-level, human-readable source and generates executable code, so these early C++ engines were arguably not compilers, at least not in the traditional sense. You'd have to expand the scope of the term's meaning to apply it to early engines like cfront. I mean, if the output of a process is something that still requires compiling, the process that produced it can hardly be called a compiler.

But there should be a way through this semantic minefield that will allow for a clear and unambiguous explanation.

Or this. Cue arguments as to whether C is a lower-level tier than C++ in 3, 2, 1... - Richfife (talk) 20:27, 3 October 2013 (UTC)

"Hello, world!" program

@Tedickey: Can you please provide a reason for reverting my sourced edit and putting that vandalism notice at my talk? Regards. —ШαмıQ @ 11:51, 25 November 2013 (UTC)

Please allow me to advocate a bit... Function main() is supposed to return an int, and the # character isn't what indicates an instruction to the compiler. The whole source code is just a bunch of instructions to the compiler, while the # character introduces a preprocessor directive. — Dsimic (talk) 13:34, 25 November 2013 (UTC)
Oh yes, I knew about that #, but I used words as written in the book. You could have simply corrected that to preprocessor directive instead of removing the whole thing. And regarding the void: The function main is not supposed to return anything here; why specify the data type, int, then? —ШαмıQ @ 14:05, 25 November 2013 (UTC)

Please note that it wasn't me reverting your edit. Regarding the main()'s return type, here's what the ISO C standard says in section 5.1.2.2.1:

The function called at program startup is named main. The implementation declares no prototype for this function. It shall be defined with a return type of int and with no parameters:

int main(void) { /* ... */ }

or with two parameters (referred to here as argc and argv, though any names may be used, as they are local to the function in which they are declared):

int main(int argc, char *argv[]) { /* ... */ }

or equivalent;[1] or in some other implementation-defined manner.

[1] Thus, int can be replaced by a typedef name defined as int, or the type of argv can be written as char ** argv, and so on.

With the specified example, it's all about staying closer to the standard. — Dsimic (talk) 14:27, 25 November 2013 (UTC)

Ok, you weren't reverting me. But you advocated for Tedickey, and that was why I said so. Well, I get your point, but don't you think void main(void) would be easier for beginners to follow? Not specifying anything might be easier for beginners to understand than specifying the data type of the return value when the function doesn't return anything. —ШαмıQ @ 15:37, 25 November 2013 (UTC)
No worries, I got your point. Let's see what Tedickey is also going to comment there. Also, please have a look at this discussion, in my opinion that would be a much better option — with return instead of exit, of course. — Dsimic (talk) 16:31, 25 November 2013 (UTC)
While "void main()" may be accepted by some compilers (perhaps even most), it is *not* a valid C program per the standard. A sample of an invalid program, which might not even compile on many systems, would make a poor example. Now whether Hello World is actually a good example is a different question, but given its very long history, and particular association with C, including it seems quite reasonable. But then that leads to the immediate issue of having an obsolete form (the original K&R version listed), which also will have problems on many systems, so supplying a modern version would seem necessary. I'm not sure where the second version came from, but as a general concept I would not be opposed to including an explicit "return 0", although that is certainly not required by the standard, and thus would make this not a "minimal" sample (which is what Hello World is supposed to be). If the second form was included, for example, in K&R2 (I'll check my copy of that later today), then that would be an argument for leaving it as-is. But in no case would "void main()" be acceptable as an example. Rwessel (talk) 17:29, 25 November 2013 (UTC)
K&R2 uses the first form. The second is using the C standard, which has been the form used by consensus for quite a while (changing the well-established example to a spurious one which contradicts the standard is an error). TEDickey (talk) 19:46, 25 November 2013 (UTC)
But for beginners, the program must be simple enough, without going into the details of return values. void would simply let the beginner know that there is no return value... Adhering to the standard, the int specification may seem obscure to a novice. I think sacrificing adherence to the standard for just this program, so that the beginner doesn't need to know the technicalities, is not a bad deal. —ШαмıQ @ 20:01, 25 November 2013 (UTC)
Again, an incorrect program seems less than helpful. Rwessel (talk) 22:22, 25 November 2013 (UTC)
Interestingly, Hello World did change in K&R2 - they added the include, but did not alter the definition of main. Of course not specifying a return type in this case means the function implicitly returns an int anyway. Rwessel (talk) 22:22, 25 November 2013 (UTC)
no - for beginners it is best to not confuse them with inaccurate information. If you choose to look into the history of this issue, you will note that there are far more reliable sources citing the standard. TEDickey (talk) 20:44, 25 November 2013 (UTC)
I'd sum it up by saying that C is specific (and weird) enough that even beginners simply have to reach the point where understanding unobvious things becomes second nature. Having that in mind, int main() explicitly returning nothing is the least weird thing. :) Also, the whole thing with the return value is already described at the end of the "Hello, world" section, so it should be fine. — Dsimic (talk) 22:48, 25 November 2013 (UTC)
Well, if that is all you want, let it be. But I found void used in quite a few books and figured there should be no problem making it so here. —ШαмıQ @ 05:26, 26 November 2013 (UTC)
sure - for instance Schildt. Be familiar with your topic before making improvements TEDickey (talk) 09:15, 26 November 2013 (UTC)
(edit conflict) Yes, many books do actually get that wrong. That's more a problem of a large number of bad books on C programming. It's an issue of portability - a specific implementation might accept "void main", and even do what you'd hope with it. An implementation *is* free to include an extension like that. But it won't work everywhere. And since this article is about C in general, and not a specific implementation, using that extension in a sample is a problem. Especially if the description of the sample includes the words "standard-conforming." And if we did use an extension in the sample, we'd then be stuck trying to explain where the sample is actually valid. Consider a similar situation with the character set - while the vast majority of C implementations use ASCII, there are certainly some that don't. Including a sample of code that didn't work on a non-ASCII implementation, especially if it was easily avoidable, would just be wrong. Rwessel (talk) 09:35, 26 November 2013 (UTC)
Very well said, second that. — Dsimic (talk) 16:59, 26 November 2013 (UTC)
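
For reference, an editor's sketch contrasting the two forms discussed above; the non-conforming one is left commented out so the whole thing still compiles:

/* The portable form: one of the two definitions the standard
   allows for main() in a hosted implementation (5.1.2.2.1). */
int main(void)
{
    return 0;
}

/* versus the non-portable form: an extension some compilers
   accept, but not a definition the standard allows

void main(void)
{
}
*/
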